Jan 28 11:22:03 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 11:22:03 crc restorecon[4763]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 11:22:03 crc 
restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:03 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 11:22:04 crc restorecon[4763]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 11:22:04 crc kubenswrapper[4804]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.706124 4804 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713111 4804 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713141 4804 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713174 4804 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713220 4804 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713301 4804 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713314 4804 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713358 4804 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713368 4804 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713376 4804 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713384 4804 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713392 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713434 4804 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713442 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713450 4804 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713458 4804 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713465 4804 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713475 4804 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713485 4804 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713502 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713510 4804 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713518 4804 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713528 4804 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713537 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713546 4804 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713554 4804 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713563 4804 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713571 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713580 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713589 4804 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713598 4804 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713606 4804 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713614 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713625 4804 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
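Every kubenswrapper record uses the klog header format, Lmmdd hh:mm:ss.uuuuuu PID file:line] msg, where L is one of I/W/E/F. A small header parser sketch; the Record shape is my choice, and msg greedily takes the rest of the line, so the fused entries in this capture need pre-splitting first:

    import re
    from typing import NamedTuple, Optional

    KLOG = re.compile(
        r"(?P<level>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d\d:\d\d:\d\d\.\d+)\s+(?P<pid>\d+) "
        r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)"
    )

    class Record(NamedTuple):
        level: str   # I=info, W=warning, E=error, F=fatal
        mmdd: str
        time: str
        pid: str
        src: str     # source file:line, e.g. feature_gate.go:330
        msg: str

    def parse_klog(entry: str) -> Optional[Record]:
        m = KLOG.search(entry)
        return Record(*m.groups()) if m else None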
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713635 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713643 4804 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713652 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713659 4804 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713667 4804 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713675 4804 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713682 4804 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713691 4804 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713698 4804 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713706 4804 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713714 4804 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713723 4804 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713731 4804 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713738 4804 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713746 4804 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713754 4804 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713762 4804 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713770 4804 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713778 4804 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713785 4804 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713793 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713801 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713808 4804 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713815 4804 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713823 4804 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713832 4804 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAzure Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713841 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713851 4804 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713863 4804 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713871 4804 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713904 4804 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713935 4804 feature_gate.go:330] unrecognized feature gate: Example Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713943 4804 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713951 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713959 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713966 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713974 4804 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.713982 4804 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714141 4804 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714157 4804 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714177 4804 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714191 4804 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714204 4804 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714216 4804 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714232 4804 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714248 4804 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714261 4804 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714270 4804 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714281 4804 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714291 4804 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714300 4804 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714309 4804 flags.go:64] FLAG: --cgroup-root="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714319 4804 flags.go:64] 
FLAG: --cgroups-per-qos="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714328 4804 flags.go:64] FLAG: --client-ca-file="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714337 4804 flags.go:64] FLAG: --cloud-config="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714348 4804 flags.go:64] FLAG: --cloud-provider="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714358 4804 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714368 4804 flags.go:64] FLAG: --cluster-domain="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714377 4804 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714386 4804 flags.go:64] FLAG: --config-dir="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714395 4804 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714405 4804 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714417 4804 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714426 4804 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714435 4804 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714444 4804 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714453 4804 flags.go:64] FLAG: --contention-profiling="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714463 4804 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714472 4804 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714481 4804 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714490 4804 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714502 4804 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714511 4804 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714521 4804 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714529 4804 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714538 4804 flags.go:64] FLAG: --enable-server="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714547 4804 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714559 4804 flags.go:64] FLAG: --event-burst="100" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714569 4804 flags.go:64] FLAG: --event-qps="50" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714578 4804 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714586 4804 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714599 4804 flags.go:64] FLAG: --eviction-hard="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714609 4804 flags.go:64] FLAG: 
--eviction-max-pod-grace-period="0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714619 4804 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714628 4804 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714637 4804 flags.go:64] FLAG: --eviction-soft="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714646 4804 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714655 4804 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714665 4804 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714674 4804 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714683 4804 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714692 4804 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714700 4804 flags.go:64] FLAG: --feature-gates="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714712 4804 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714721 4804 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714730 4804 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714739 4804 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714749 4804 flags.go:64] FLAG: --healthz-port="10248" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714758 4804 flags.go:64] FLAG: --help="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714767 4804 flags.go:64] FLAG: --hostname-override="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714775 4804 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714785 4804 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714794 4804 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714802 4804 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714812 4804 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714820 4804 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714829 4804 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714838 4804 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714846 4804 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714855 4804 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714865 4804 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714873 4804 flags.go:64] FLAG: --kube-reserved="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714912 4804 flags.go:64] FLAG: 
--kube-reserved-cgroup="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714922 4804 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714931 4804 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714940 4804 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714948 4804 flags.go:64] FLAG: --lock-file="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714959 4804 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714968 4804 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714978 4804 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.714993 4804 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715001 4804 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715010 4804 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715019 4804 flags.go:64] FLAG: --logging-format="text" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715027 4804 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715037 4804 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715046 4804 flags.go:64] FLAG: --manifest-url="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715055 4804 flags.go:64] FLAG: --manifest-url-header="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715067 4804 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715078 4804 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715093 4804 flags.go:64] FLAG: --max-pods="110" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715104 4804 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715115 4804 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715126 4804 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715138 4804 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715149 4804 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715159 4804 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715168 4804 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715187 4804 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715196 4804 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715206 4804 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715215 4804 flags.go:64] FLAG: --pod-cidr="" Jan 28 11:22:04 crc 
kubenswrapper[4804]: I0128 11:22:04.715224 4804 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715238 4804 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715247 4804 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715256 4804 flags.go:64] FLAG: --pods-per-core="0" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715265 4804 flags.go:64] FLAG: --port="10250" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715274 4804 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715283 4804 flags.go:64] FLAG: --provider-id="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715292 4804 flags.go:64] FLAG: --qos-reserved="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715301 4804 flags.go:64] FLAG: --read-only-port="10255" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715311 4804 flags.go:64] FLAG: --register-node="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715320 4804 flags.go:64] FLAG: --register-schedulable="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715330 4804 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715345 4804 flags.go:64] FLAG: --registry-burst="10" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715355 4804 flags.go:64] FLAG: --registry-qps="5" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715364 4804 flags.go:64] FLAG: --reserved-cpus="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715373 4804 flags.go:64] FLAG: --reserved-memory="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715384 4804 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715393 4804 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715402 4804 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715411 4804 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715420 4804 flags.go:64] FLAG: --runonce="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715428 4804 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715438 4804 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715447 4804 flags.go:64] FLAG: --seccomp-default="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715456 4804 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715464 4804 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715474 4804 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715483 4804 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715492 4804 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715501 4804 flags.go:64] FLAG: 
--storage-driver-secure="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715510 4804 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715519 4804 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715528 4804 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715537 4804 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715546 4804 flags.go:64] FLAG: --system-cgroups="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715555 4804 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715569 4804 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715579 4804 flags.go:64] FLAG: --tls-cert-file="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715588 4804 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715598 4804 flags.go:64] FLAG: --tls-min-version="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715608 4804 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715617 4804 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715626 4804 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715635 4804 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715644 4804 flags.go:64] FLAG: --v="2" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715656 4804 flags.go:64] FLAG: --version="false" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715667 4804 flags.go:64] FLAG: --vmodule="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715678 4804 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.715688 4804 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.715962 4804 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.715977 4804 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.715987 4804 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.715996 4804 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716007 4804 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
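Between the gate-warning bursts, flags.go:64 dumps every flag exactly once as FLAG: --name="value", quoted even when the value is empty or list-valued, which makes the kubelet's effective command line trivially machine-readable. A sketch:

    import re

    FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    def flag_dump(stream: str) -> dict[str, str]:
        """Recover the kubelet's effective flag values from the dump."""
        return dict(FLAG.findall(stream))

    # On this capture, for example:
    #   flag_dump(text)["--node-ip"] == "192.168.126.11"
    #   flag_dump(text)["--config"]  == "/etc/kubernetes/kubelet.conf"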
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716018 4804 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716027 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716035 4804 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716043 4804 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716051 4804 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716059 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716067 4804 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716075 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716083 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716090 4804 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716098 4804 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716108 4804 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716118 4804 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716127 4804 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716136 4804 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716147 4804 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716166 4804 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716181 4804 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716191 4804 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716202 4804 feature_gate.go:330] unrecognized feature gate: Example Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716213 4804 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716222 4804 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716230 4804 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716237 4804 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716246 4804 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716253 4804 feature_gate.go:330] 
unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716261 4804 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716269 4804 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716277 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716288 4804 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716296 4804 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716303 4804 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716311 4804 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716319 4804 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716326 4804 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716334 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716341 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716349 4804 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716357 4804 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716366 4804 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716375 4804 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716386 4804 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716396 4804 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716406 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716418 4804 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716427 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716435 4804 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716446 4804 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
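Three distinct W-level shapes are interleaved in these bursts: unrecognized gates (feature_gate.go:330), a deprecated gate forced on (KMSv1, go:351), and GA gates forced on (go:353). A triage sketch that buckets them; the bucket names are mine:

    import re

    KIND = re.compile(
        r"feature_gate\.go:\d+\] (?:"
        r"unrecognized feature gate: (?P<unknown>\w+)"
        r"|Setting deprecated feature gate (?P<deprecated>\w+)=true"
        r"|Setting GA feature gate (?P<ga>\w+)=true)"
    )

    def triage_gates(stream: str) -> dict[str, set[str]]:
        out = {"unknown": set(), "deprecated": set(), "ga": set()}
        for m in KIND.finditer(stream):
            for k in out:
                if m[k]:
                    out[k].add(m[k])
        return out

    # On this capture: deprecated == {"KMSv1"}; ga == {"ValidatingAdmissionPolicy",
    # "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders"}.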
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716456 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716466 4804 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716475 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716483 4804 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716491 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716499 4804 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716507 4804 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716515 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716525 4804 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716535 4804 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716544 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716552 4804 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716560 4804 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716569 4804 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716577 4804 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716584 4804 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716592 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.716601 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.716626 4804 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.726330 4804 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.726362 4804 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726450 4804 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 
11:22:04.726459 4804 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726467 4804 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726475 4804 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726481 4804 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726487 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726493 4804 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726498 4804 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726505 4804 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726514 4804 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726521 4804 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726527 4804 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726533 4804 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726538 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726544 4804 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726549 4804 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726554 4804 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726559 4804 feature_gate.go:330] unrecognized feature gate: Example Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726565 4804 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726570 4804 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726575 4804 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726582 4804 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
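Buried between the bursts, two I-level server.go records above pin down the build: kubeletVersion="v1.31.5" with empty GOGC/GOMAXPROCS/GOTRACEBACK overrides. klog structured attributes print as key="value", so a one-line extractor sketch (name mine) covers any of them:

    import re

    def attr(stream: str, key: str) -> list[str]:
        # klog structured attributes print as key="value".
        return re.findall(rf'{key}="([^"]*)"', stream)

    # attr(text, "kubeletVersion") -> ["v1.31.5"]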
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726589 4804 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726594 4804 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726600 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726605 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726610 4804 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726617 4804 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726622 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726627 4804 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726632 4804 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726639 4804 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726644 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726649 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726656 4804 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726661 4804 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726666 4804 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726671 4804 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726676 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726681 4804 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726687 4804 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726692 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726698 4804 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726703 4804 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726708 4804 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726714 4804 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726719 4804 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726724 4804 feature_gate.go:330] unrecognized 
feature gate: NetworkDiagnosticsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726729 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726734 4804 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726739 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726745 4804 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726750 4804 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726755 4804 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726760 4804 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726765 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726770 4804 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726776 4804 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726781 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726787 4804 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726792 4804 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726797 4804 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726802 4804 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726808 4804 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726813 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726819 4804 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726826 4804 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726833 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726839 4804 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726844 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.726851 4804 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.726861 4804 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727063 4804 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727077 4804 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727085 4804 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727093 4804 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727099 4804 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727105 4804 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727110 4804 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727117 4804 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
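Each warning burst ends with an I-level feature_gate.go:386 summary printed in Go's map formatting: feature gates: {map[Name:bool ...]}. That summary, not the warnings, is the effective configuration, and it is identical on every occurrence here. A parsing sketch:

    import re

    MAP = re.compile(r"feature gates: \{map\[(.*?)\]\}")

    def effective_gates(stream: str) -> dict[str, bool]:
        # All occurrences are identical in this capture; take the last.
        pairs = MAP.findall(stream)[-1].split()
        return {name: val == "true"
                for name, val in (p.split(":", 1) for p in pairs)}

    # effective_gates(text)["KMSv1"] is True;
    # effective_gates(text)["DynamicResourceAllocation"] is False.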
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727124 4804 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727130 4804 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727137 4804 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727142 4804 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727148 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727154 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727159 4804 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727165 4804 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727170 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727176 4804 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727182 4804 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727188 4804 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727193 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727199 4804 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727204 4804 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727209 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727215 4804 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727220 4804 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727225 4804 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727230 4804 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727235 4804 feature_gate.go:330] unrecognized feature gate: Example Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727240 4804 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727246 4804 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727251 4804 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727256 4804 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727261 4804 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 11:22:04 crc 
kubenswrapper[4804]: W0128 11:22:04.727266 4804 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727272 4804 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727277 4804 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727282 4804 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727287 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727292 4804 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727297 4804 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727303 4804 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727308 4804 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727313 4804 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727318 4804 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727323 4804 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727328 4804 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727333 4804 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727338 4804 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727345 4804 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727351 4804 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727357 4804 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727363 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727368 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727373 4804 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727379 4804 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
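The identical gate set is parsed four times in under 15 ms (bursts at .713, .715–.716, .726 and .727), presumably once per place the gates are applied during server construction; the summary map after each pass never changes. A heuristic sketch that counts the passes by watching for the first repeated gate name; it assumes a given pass logs each gate at most once:

    import re

    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    def count_passes(stream: str) -> int:
        seen: set[str] = set()
        passes = 0
        for gate in GATE.findall(stream):
            if gate in seen:      # a repeat means a new pass began
                seen.clear()
                passes += 1
            seen.add(gate)
        return passes + (1 if seen else 0)

    # Returns 4 on this capture.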
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727387 4804 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727392 4804 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727398 4804 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727404 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727409 4804 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727414 4804 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727419 4804 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727425 4804 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727430 4804 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727435 4804 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727440 4804 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727445 4804 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727450 4804 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727455 4804 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.727461 4804 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.727470 4804 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.729598 4804 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.736335 4804 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.736418 4804 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
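bootstrap.go skips bootstrapping above because the existing client certificate still verifies. A sketch of the same validity check done offline against the pem that certificate_store.go loads; this assumes the third-party cryptography package is available, and not_valid_after_utc needs cryptography >= 42:

    from datetime import datetime, timezone
    from cryptography import x509  # third-party; pip install cryptography

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    def client_cert_valid(path: str = PEM) -> bool:
        # Load the current client cert and compare notAfter to the clock,
        # mirroring in spirit the kubelet's "no bootstrap necessary" check.
        with open(path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        return cert.not_valid_after_utc > datetime.now(timezone.utc)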
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.739151 4804 server.go:997] "Starting client certificate rotation"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.739199 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.739473 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 00:57:21.2808899 +0000 UTC
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.740016 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.762495 4804 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.764665 4804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.767586 4804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.783203 4804 log.go:25] "Validated CRI v1 runtime API"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.815082 4804 log.go:25] "Validated CRI v1 image API"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.817089 4804 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.824457 4804 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-11-17-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.824501 4804 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.839634 4804 manager.go:217] Machine: {Timestamp:2026-01-28 11:22:04.836458288 +0000 UTC m=+0.631338282 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c53b59f0-ae5a-4211-87ed-9529d3bdae0b BootID:7f97964b-a8a7-425a-af7c-d85a989338ac Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:60:3d:55 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:60:3d:55 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9f:30:14 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f9:4b:a0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3e:d7:81 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:df:fa:9b Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:30:05:ef Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:54:e6:b2:0b:81 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:04:95:d3:75:27 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.839847 4804 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.840058 4804 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.840676 4804 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.840896 4804 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.840930 4804 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.841157 4804 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.841169 4804 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.841802 4804 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.841832 4804 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.842067 4804 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.842159 4804 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.846743 4804 kubelet.go:418] "Attempting to sync node with API server"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.846776 4804 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.846811 4804 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.846837 4804 kubelet.go:324] "Adding apiserver pod source"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.846855 4804 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.852744 4804 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.853080 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.853148 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.853427 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.853482 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.854017 4804 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
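The HardEvictionThresholds in the nodeConfig above mean, for example, that nodefs.available dropping below 10% (Percentage:0.1, GracePeriod:0) triggers hard eviction immediately. A rough Linux-only Go sketch of that percentage check against the kubelet root directory; the helper is illustrative and not kubelet code, and the path and threshold are taken from the log:

    // evictioncheck.go - where does /var/lib/kubelet stand against the
    // nodefs.available < 10% hard eviction threshold from the nodeConfig?
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var/lib/kubelet", &st); err != nil {
            panic(err)
        }
        // fraction of blocks still available to unprivileged users,
        // analogous to the nodefs.available signal
        avail := float64(st.Bavail) / float64(st.Blocks)
        fmt.Printf("nodefs.available = %.1f%%\n", avail*100)
        if avail < 0.10 {
            fmt.Println("below the 10% hard threshold: kubelet would begin evicting pods")
        }
    }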
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.857544 4804 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859113 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859149 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859163 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859174 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859189 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859199 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859210 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859224 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859235 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859245 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859258 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.859267 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.860230 4804 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.860741 4804 server.go:1280] "Started kubelet"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.861126 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.861572 4804 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.861764 4804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.863016 4804 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 11:22:04 crc systemd[1]: Started Kubernetes Kubelet.
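Every "dial tcp 38.102.83.27:6443: connect: connection refused" line in this section is the kubelet retrying api-int.crc.testing:6443 before the kube-apiserver (itself launched from the static pod manifests being watched above) is listening; the errors clear on their own once it binds. A quick reachability probe one might run alongside these logs (an assumed triage helper, not part of the kubelet):

    // apicheck.go - is the apiserver endpoint from the log accepting TCP yet?
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
        if err != nil {
            // same failure mode as the journal's "connection refused" entries
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver endpoint is accepting TCP connections")
    }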
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864391 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864429 4804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864637 4804 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864656 4804 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864712 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:10:36.150446762 +0000 UTC
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.864739 4804 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.864787 4804 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.865996 4804 factory.go:55] Registering systemd factory
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866024 4804 factory.go:221] Registration of the systemd container factory successfully
Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.865971 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.866054 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866591 4804 factory.go:153] Registering CRI-O factory
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866610 4804 factory.go:221] Registration of the crio container factory successfully
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866669 4804 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866698 4804 factory.go:103] Registering Raw factory
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.866714 4804 manager.go:1196] Started watching for new ooms in manager
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.867410 4804 manager.go:319] Starting recovery of all containers
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.868649 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.869516 4804 server.go:460] "Adding debug handlers to kubelet server"
Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.870414 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ee135d1180466 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:22:04.860712038 +0000 UTC m=+0.655592022,LastTimestamp:2026-01-28 11:22:04.860712038 +0000 UTC m=+0.655592022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.881135 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882210 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882312 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882381 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882444 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882505 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882565 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882645 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883695 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883804 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883918 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.884551 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885291 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885324 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885404 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885431 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885446 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885464 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885480 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885494 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885511 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885529 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885548 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885571 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885590 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885609 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885631 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885651 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885697 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885716 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885732 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885750 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885764 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885780 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885794 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885810 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885826 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885841 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885860 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885896 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885913 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885930 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885947 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885976 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885995 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886015 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886055 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886074 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886091 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886108 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886128 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886150 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886176 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886196 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886214 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886234 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886254 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886272 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886294 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886310 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886328 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886345 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886365 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886383 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886400 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886417 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886432 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886450 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886466 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886483 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886499 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886520 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886536 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886552 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886568 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886584 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886600 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886615 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886631 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886650 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886666 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886681 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886696 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886712 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886728 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886743 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886760 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886775 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886792 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886807 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886825 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886840 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886857 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886873 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886909 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886929 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886948 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886948 4804 manager.go:324] Recovery completed
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886965 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887080 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887127 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887140 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887151 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887161 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887172 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887192 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887205 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887220 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887235 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887247 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887260 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887274 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887285 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887297 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887308 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887318 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887330 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887341 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887352 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887362 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887371 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887384 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887396 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887411 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887423 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887455 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887466 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887479 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887490 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887501 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887514 4804 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887526 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887537 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887548 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887559 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887570 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887583 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887594 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887604 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887614 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887624 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887636 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887647 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887659 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887669 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887681 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887690 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887699 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887709 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887722 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887731 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887741 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887750 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887759 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887769 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887780 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887792 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887806 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887816 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887828 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887839 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887852 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887864 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887874 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887905 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887920 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887937 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892893 4804 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892947 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892967 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892980 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892993 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893005 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893020 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893055 4804 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893067 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893077 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893088 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893100 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893111 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893121 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893131 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893142 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893152 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893165 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893176 4804 reconstruct.go:130] "Volume is 
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893185 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893195 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893207 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893217 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893226 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893236 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893246 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893255 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893265 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893275 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893287 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893299 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893313 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893326 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893372 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893386 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893398 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893409 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893420 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893432 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893443 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893454 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893465 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893476 4804 reconstruct.go:97] "Volume reconstruction finished"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893484 4804 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.898838 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900922 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901625 4804 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901641 4804 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901657 4804 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.912000 4804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913698 4804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
protocol="IPv6" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913733 4804 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913753 4804 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.913792 4804 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.915319 4804 policy_none.go:49] "None policy: Start" Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.917694 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.917783 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.918418 4804 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.918445 4804 state_mem.go:35] "Initializing new in-memory state store" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.965710 4804 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971088 4804 manager.go:334] "Starting Device Plugin manager" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971195 4804 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971213 4804 server.go:79] "Starting device plugin registration server" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971686 4804 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971709 4804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971922 4804 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.972019 4804 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.972037 4804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.978650 4804 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.013922 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 11:22:05 crc kubenswrapper[4804]: 
I0128 11:22:05.014071 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015543 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015952 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016036 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017074 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017273 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017316 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017580 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017619 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018570 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018809 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018941 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019103 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019134 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019788 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019868 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019930 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019950 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020442 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020891 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021020 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021044 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.069559 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.071816 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073067 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073148 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073189 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.073987 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096115 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096200 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096219 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096234 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096249 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096269 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096382 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096430 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096469 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096581 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096717 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096739 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096774 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198644 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198713 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198738 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198786 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198806 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198829 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198910 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198940 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198969 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198990 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199105 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199125 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199106 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198928 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199141 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199042 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199214 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199161 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199127 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199145 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199233 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198976 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.274979 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276138 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276175 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.276564 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.360189 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.368506 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.374275 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.399945 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b WatchSource:0}: Error finding container 69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b: Status 404 returned error can't find the container with id 69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.401089 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4 WatchSource:0}: Error finding container 2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4: Status 404 returned error can't find the container with id 2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4 Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.403877 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941 WatchSource:0}: Error finding container 24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941: Status 404 returned error can't find the container with id 24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941 Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.405403 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.411266 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.417920 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1 WatchSource:0}: Error finding container 6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1: Status 404 returned error can't find the container with id 6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1 Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.432298 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356 WatchSource:0}: Error finding container 20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356: Status 404 returned error can't find the container with id 20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356 Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.470175 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.677257 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679250 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.679883 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.862162 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.865230 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:44:32.319393629 +0000 UTC Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.920053 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1"} Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.921911 4804 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4"} Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.922872 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941"} Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.923725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b"} Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.924551 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356"} Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.187360 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.187818 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.223223 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.223303 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.271877 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.319536 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.319597 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.469765 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.469878 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.480566 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482463 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.482945 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.861898 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.866083 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:27:04.884503652 +0000 UTC Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.911270 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.912506 4804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931038 4804 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb" exitCode=0 Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931138 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931150 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.932050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.932064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933473 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933516 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933524 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934918 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" exitCode=0 Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934946 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935082 4804 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935826 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936121 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef" exitCode=0 Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936173 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936234 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937359 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937398 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937712 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7540e973732facca4f893f6091ee46a0a9aca077a48c75dfec8d5a4f8816cfb0" exitCode=0 Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937741 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7540e973732facca4f893f6091ee46a0a9aca077a48c75dfec8d5a4f8816cfb0"} Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937775 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938161 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938541 4804 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938550 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.862002 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.866335 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:10:16.011717552 +0000 UTC Jan 28 11:22:07 crc kubenswrapper[4804]: E0128 11:22:07.873120 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.944579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"af8f4d9cbcd2486a41c7ef311707360b23e4c873cccb7bc35b75f90bf9831039"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.944649 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947249 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947261 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947364 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948907 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948925 4804 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951702 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951713 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951724 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953453 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c" exitCode=0 Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953553 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953985 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c"} Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:07 crc kubenswrapper[4804]: W0128 11:22:07.957219 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:07 crc kubenswrapper[4804]: E0128 11:22:07.957334 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.083527 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084591 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084620 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084675 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.085167 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:08 crc kubenswrapper[4804]: W0128 11:22:08.088932 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.091035 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:08 crc kubenswrapper[4804]: W0128 11:22:08.560475 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.560627 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.866785 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:21:50.922248677 +0000 UTC Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.961574 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
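The PLEG stream makes the static-pod lifecycle visible: each pod first runs one or more setup containers to completion ("Generic (PLEG): container finished ... exitCode=0" followed by ContainerDied), then its long-running containers report ContainerStarted. A sketch that groups the PLEG events by pod into per-pod timelines (field names follow the event={"ID":...,"Type":...,"Data":...} structure visible in the log):

    import re
    import sys
    from collections import defaultdict

    # Capture the klog timestamp (the one with a fractional second), the pod,
    # and the event type plus container id from a PLEG line.
    EVENT = re.compile(
        r'(\d{2}:\d{2}:\d{2}\.\d+).*?pod="([^"]+)"\s+'
        r'event={"ID":"[^"]+","Type":"(\w+)","Data":"(\w+)"}'
    )

    timeline = defaultdict(list)
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ts, pod, etype, data = m.groups()
            timeline[pod].append((ts, etype, data[:12]))

    for pod, events in sorted(timeline.items()):
        print(pod)
        for ts, etype, data in events:
            print(f"  {ts}  {etype:<16} {data}")

Applied to this excerpt, etcd-crc shows the longest chain of ContainerDied/ContainerStarted events, matching its multiple setup steps (46e3c457..., 458a0e0a..., 58a6f578...) before the main containers come up.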
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b"} Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.961711 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963105 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.965934 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c" exitCode=0 Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966013 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c"} Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966059 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966100 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966136 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966201 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968239 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968371 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.867018 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:47:23.918256267 +0000 UTC Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.945987 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973626 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad"} Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973669 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973697 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418"} Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973714 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0"} Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8"} Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973730 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01"} Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973717 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.223665 4804 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.223851 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225183 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225295 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.867724 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:56:48.62020736 +0000 UTC Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976732 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976784 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976806 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978531 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.979022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.979050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.247829 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.286254 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288821 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288861 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.322557 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.328580 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.867942 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:40:13.553580864 +0000 UTC
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.880318 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.979218 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980359 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980395 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980406 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.688623 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.868693 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:01:18.626430416 +0000 UTC
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.982846 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.982955 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.076572 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.077209 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.077461 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079283 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.869188 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:33:29.993086529 +0000 UTC
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.925509 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.925825 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.927963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.928023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.928040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.985555 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.985611 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.009397 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.009676 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011365 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.137177 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.311795 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.312003 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.313967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.314021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.314039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.869978 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:56:49.097254149 +0000 UTC
Jan 28 11:22:14 crc kubenswrapper[4804]: E0128 11:22:14.978737 4804 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.987550 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.689203 4804 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.689304 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.870996 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:43:05.735931818 +0000 UTC
Jan 28 11:22:16 crc kubenswrapper[4804]: I0128 11:22:16.871647 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:29:10.441157274 +0000 UTC
Jan 28 11:22:17 crc kubenswrapper[4804]: I0128 11:22:17.872438 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:11:59.098011467 +0000 UTC
Jan 28 11:22:18 crc kubenswrapper[4804]: W0128 11:22:18.774043 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.774459 4804 trace.go:236] Trace[939492508]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:08.772) (total time: 10002ms):
Jan 28 11:22:18 crc kubenswrapper[4804]: Trace[939492508]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:18.774)
Jan 28 11:22:18 crc kubenswrapper[4804]: Trace[939492508]: [10.002320027s] [10.002320027s] END
Jan 28 11:22:18 crc kubenswrapper[4804]: E0128 11:22:18.774481 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.862858 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.873387 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:30:53.75753583 +0000 UTC
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.962344 4804 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.962428 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.969856 4804 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.969947 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
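The failure mode is now progressing in a telling way: "connection refused" (nothing listening) gives way at 11:22:18 to "net/http: TLS handshake timeout" (the socket accepts but TLS is not serving yet; the reflector trace shows the LIST burning its full 10 seconds), and then to HTTP 403 on /livez, a proper RBAC rejection of the anonymous probe, which means the apiserver is actually serving requests. Shortly after, one of the kube-apiserver pod's containers (the log parsed just before the exit is kube-apiserver-check-endpoints/0.log) finishes with exitCode=255 and is restarted. A sketch that buckets these errors into phases to make the progression visible:

    import re
    import sys

    # Ordered (needle, phase) pairs; the phase labels are editorial
    # interpretations of the error strings, not kubelet terminology.
    PHASES = [
        ("connect: connection refused", "apiserver socket closed"),
        ("TLS handshake timeout", "socket open, TLS not serving"),
        ("statuscode: 403", "serving, anonymous probe denied"),
    ]

    for line in sys.stdin:
        for needle, phase in PHASES:
            if needle in line:
                ts = re.search(r'\d{2}:\d{2}:\d{2}\.\d+', line)
                print(f"{ts.group(0) if ts else '?':18s} {phase}")
                break

The output over this excerpt is a clean three-band timeline ending in the 403 band, right before the probes start succeeding below.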
probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.998469 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000367 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" exitCode=255 Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000435 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b"} Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000812 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.004203 4804 scope.go:117] "RemoveContainer" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.873795 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:02:02.545051435 +0000 UTC Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.006459 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.009096 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e"} Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.009418 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.010997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.011040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.011059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.874947 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:47:10.722942249 +0000 UTC Jan 28 11:22:21 crc kubenswrapper[4804]: I0128 11:22:21.876039 4804 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:19:29.104614141 +0000 UTC Jan 28 11:22:22 crc kubenswrapper[4804]: I0128 11:22:22.876497 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:08:35.909577924 +0000 UTC Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083636 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083764 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084932 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.089114 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.113709 4804 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.859689 4804 apiserver.go:52] "Watching apiserver" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866261 4804 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866470 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866725 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866913 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867000 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867074 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867073 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867149 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867264 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867286 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
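The pod_workers.go:1301 errors above are the kubelet's network-readiness gate: until the container runtime reports NetworkReady=true (for a CNI runtime, until a config file appears under /etc/kubernetes/cni/net.d/), every pod that needs pod networking is skipped, while host-network pods such as the static control-plane pods keep running. A toy sketch of that gate, with hypothetical names (canSync is not the kubelet's actual function):

```go
// Minimal sketch of the "network is not ready" gate, under the assumption
// described above: non-host-network pods are skipped while NetworkReady=false.
package main

import "fmt"

type pod struct {
    name        string
    hostNetwork bool
}

func canSync(p pod, networkReady bool) error {
    if !networkReady && !p.hostNetwork {
        return fmt.Errorf("network is not ready: container runtime network not ready: NetworkReady=false")
    }
    return nil
}

func main() {
    pods := []pod{
        {name: "network-check-target-xd92c", hostNetwork: false},
        {name: "etcd-crc", hostNetwork: true}, // static control-plane pods run host-network
    }
    for _, p := range pods {
        if err := canSync(p, false); err != nil {
            fmt.Printf("skipping %s: %v\n", p.name, err)
        } else {
            fmt.Printf("syncing %s\n", p.name)
        }
    }
}
```

Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867304 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"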
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.869061 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870106 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870210 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870267 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872392 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872482 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872859 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.874532 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.876579 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:03:58.773462429 +0000 UTC Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.883012 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.926512 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.938154 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.949086 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.949215 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.960210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.960909 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.961128 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.961357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.962604 4804 trace.go:236] Trace[1821295227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:13.512) (total time: 10450ms): Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[1821295227]: ---"Objects listed" error: 10450ms (11:22:23.962) Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[1821295227]: [10.450077397s] [10.450077397s] END Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.962630 4804 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.964483 4804 trace.go:236] Trace[211428802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:13.329) (total time: 10635ms): Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[211428802]: ---"Objects listed" error: 10634ms (11:22:23.964) Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[211428802]: [10.635098218s] [10.635098218s] END Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.964523 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.964782 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.965242 4804 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.965417 4804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.967572 4804 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
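"Failed to ensure lease exists, will retry" comes from the kubelet's node-lease controller: node heartbeats are Lease objects in the kube-node-lease namespace, and while the API server is still settling the controller just logs the failure and retries on an interval (6.4s here). A minimal client-go equivalent of a single renew attempt, as a sketch; the kubeconfig path is an assumption, not taken from the log:

```go
// Hedged sketch of what the lease controller retries above: fetch the
// node's Lease in kube-node-lease and bump its renewTime. Not the kubelet's
// code, just the same API calls via client-go.
package main

import (
    "context"
    "log"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // path assumed
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    leases := client.CoordinationV1().Leases("kube-node-lease")

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    lease, err := leases.Get(ctx, "crc", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("ensure lease: %v (the kubelet logs this and retries)", err)
    }
    now := metav1.NewMicroTime(time.Now())
    lease.Spec.RenewTime = &now
    if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
        log.Fatalf("renew lease: %v", err)
    }
    log.Println("lease renewed")
}
```

Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.967998 4804 kubelet.go:2421] "SyncLoop ADD" source="api"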
pods=["openshift-etcd/etcd-crc"] Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.972217 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.981007 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.982645 4804 csr.go:261] certificate signing request csr-rjlfw is approved, waiting to be issued Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.990564 4804 csr.go:257] certificate signing request csr-rjlfw is issued Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.990718 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.001451 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.009949 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.012416 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.016706 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.023578 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.025299 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.027253 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.033579 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.052380 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.064463 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065728 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065781 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065816 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065861 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
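Every "Failed to update status for pod" entry above fails the same way: the API server must call the pod.network-node-identity.openshift.io mutating webhook before admitting the status patch, and the webhook backend on 127.0.0.1:9743 is not listening yet, so each POST dies with connection refused and the kubelet's status manager simply retries on the next sync. The refusal is reproducible with a plain TCP dial, sketched below (run on the node itself):

```go
// Tiny reachability check for the webhook backend the API server cannot
// reach above; a refused dial reproduces "connect: connection refused".
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    if err != nil {
        fmt.Println("webhook backend unreachable:", err)
        return
    }
    conn.Close()
    fmt.Println("webhook backend is accepting connections")
}
```

Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065909 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: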
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065929 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065948 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066018 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066177 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066198 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066482 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066531 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066579 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066666 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066833 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066908 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066948 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066975 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066977 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067002 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067002 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067097 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067141 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067179 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067144 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067210 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067238 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067264 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067383 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067416 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067464 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067469 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067488 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067522 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067481 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.067551 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 11:22:24.567531116 +0000 UTC m=+20.362411110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067815 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067818 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067742 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067840 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067782 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067743 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067876 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067938 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067958 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067961 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068204 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068225 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068248 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068274 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067975 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068347 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068443 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068463 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068488 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068535 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068558 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068581 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068626 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068648 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068745 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068767 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068789 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.068830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068906 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068941 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068963 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068984 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069004 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069025 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069048 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069073 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069097 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069119 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069139 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069164 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069235 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069280 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069324 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069351 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069374 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069414 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069490 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069514 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069538 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069563 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069586 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069627 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069654 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069680 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069732 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069782 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069856 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069878 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069919 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069947 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069968 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070039 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070062 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070084 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070127 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070202 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070223 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068294 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068318 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068364 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068546 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068849 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068873 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068999 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069001 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069152 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069213 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069429 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069477 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069555 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069573 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069998 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070661 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070851 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070982 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071322 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071512 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071614 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071617 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071668 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072067 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072882 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072762 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073273 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073786 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073889 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073823 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074021 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074084 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074142 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074160 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074196 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074344 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074352 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074398 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074423 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074444 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074497 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074517 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074537 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074568 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074589 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074607 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074624 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074644 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074705 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074771 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074787 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074803 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074835 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074873 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074907 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074924 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074965 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074981 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075018 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075044 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075060 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075094 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075114 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075911 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075938 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075958 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075995 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076013 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076030 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076045 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076076 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076094 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076110 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076234 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076251 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 11:22:24 crc kubenswrapper[4804]: 
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076284 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076337 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076353 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076370 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076387 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076423 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076450 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076475 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076496 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076545 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076568 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076826 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076867 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076914 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076987 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077090 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077133 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077155 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077175 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077194 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077234 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077313 4804 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077325 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077335 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077348 4804 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077359 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077368 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077378 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077386 4804 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077396 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077407 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077440 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077454 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077465 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077741 4804 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077758 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077769 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077778 4804 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077787 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077796 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077806 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077819 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077828 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077837 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077847 4804 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077858 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077998 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078011 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078020 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078029 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078037 4804 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078046 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078055 4804 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078064 4804 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078073 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078558 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078573 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078587 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078600 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078610 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078619 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078630 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078642 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078653 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078665 4804 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078676 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078691 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078702 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078713 4804 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078723 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078732 4804 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078741 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078750 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078763 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078772 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078782 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078791 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078801 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078810 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078819 4804 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078828 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078837 4804 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078849 4804 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078895 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078906 4804 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078915 4804 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.080417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.084913 4804 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074563 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075694 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076077 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076385 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076383 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076529 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077099 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077188 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077383 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077402 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077567 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077603 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077997 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078467 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078502 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078773 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079314 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079921 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081805 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082321 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082364 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082499 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082681 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.083839 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.087552 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.587535495 +0000 UTC m=+20.382415479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087863 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087871 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.085823 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.086199 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087176 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087187 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.088162 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.088213 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089098 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089518 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089519 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089557 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090100 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090923 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090926 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091176 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091258 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091349 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091598 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091638 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091899 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.092170 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.092186 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093066 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093317 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093799 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094163 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094554 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095225 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095406 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.083953 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.095739 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.595722083 +0000 UTC m=+20.390602057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095742 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096347 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096370 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096468 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096831 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096874 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.097217 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098004 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.097397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098205 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098274 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098335 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098440 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.598423618 +0000 UTC m=+20.393303602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098595 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098778 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098804 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098816 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098824 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098834 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098855 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.598842921 +0000 UTC m=+20.393722965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099051 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099345 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099630 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.100762 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.101201 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.101920 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.102007 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.102651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103068 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103828 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103996 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.104039 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.104155 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.105382 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.107162 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.107488 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108403 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109382 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109495 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.110540 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.110702 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111077 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111562 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.112419 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113059 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113334 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113781 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114415 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.115453 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.116275 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120114 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120314 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120582 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.128017 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.129404 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.133302 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.139947 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.142616 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.145750 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.152658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.162630 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.167738 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.175316 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-r6hvc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.175629 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.176954 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.178563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.179255 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181380 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181396 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181408 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181421 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181432 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181442 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181450 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181458 4804 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181465 4804 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181475 4804 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181485 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181496 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181507 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181517 4804 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181528 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181539 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181549 4804 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181571 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181582 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181593 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181604 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181616 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181627 4804 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181639 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181650 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181662 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181674 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181685 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181696 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181742 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181754 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181764 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181775 4804 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181786 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181796 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181806 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on 
node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181817 4804 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181828 4804 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181841 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181851 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181861 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181872 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181903 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181914 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181925 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181936 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181946 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181968 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181979 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181998 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182011 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182022 4804 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182034 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182045 4804 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182056 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182067 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182077 4804 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182087 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182099 4804 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182111 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182123 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182135 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" 
(UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182146 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182158 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182169 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182180 4804 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182193 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182207 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182220 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182231 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182242 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182254 4804 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182277 4804 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182288 4804 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182299 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182311 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182321 4804 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182332 4804 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182343 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182354 4804 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182366 4804 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182377 4804 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182388 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182398 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182410 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182421 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182433 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182445 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182457 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182469 4804 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182480 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182493 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182505 4804 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182516 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182617 4804 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182632 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182643 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182653 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182664 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182675 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182686 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182696 4804 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182707 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182717 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182729 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182739 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182750 4804 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182767 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182777 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182788 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182799 4804 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182809 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182820 4804 
reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182831 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182842 4804 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182853 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182865 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182880 4804 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182915 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182926 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182937 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182948 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182960 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182970 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182981 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.182992 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.183002 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.183012 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.188002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.193375 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.194489 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.195675 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.208983 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.212274 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.250045 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.266011 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284779 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.294855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.305899 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.318912 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.330742 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.341096 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.350788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.371810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/v
ar/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.382520 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.386940 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.386978 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.387057 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.406699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.506671 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.517296 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e616d20_36f4_4d59_9ce0_d2e18fd63902.slice/crio-43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0 WatchSource:0}: Error finding container 43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0: Status 404 returned error can't find the container with id 43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0 Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.588594 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.588673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588732 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588759 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.588731195 +0000 UTC m=+21.383611179 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588794 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.588785087 +0000 UTC m=+21.383665071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690128 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690180 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690204 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690318 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690339 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690351 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690361 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690412 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690393874 +0000 UTC m=+21.485273848 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690375 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690542 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690563 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690461 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690441686 +0000 UTC m=+21.485321670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690640 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690616281 +0000 UTC m=+21.485496265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.737843 4804 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738208 4804 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738245 4804 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738293 4804 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738314 4804 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.738336 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.27:49756->38.102.83.27:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188ee135f383f2b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:22:05.438210741 +0000 UTC m=+1.233090725,LastTimestamp:2026-01-28 11:22:05.438210741 +0000 UTC m=+1.233090725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738312 4804 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc 
kubenswrapper[4804]: W0128 11:22:24.738322 4804 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738329 4804 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738342 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738343 4804 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738361 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738360 4804 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738381 4804 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738399 4804 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738415 4804 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738435 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.877274 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:14:09.77436866 +0000 UTC Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.919083 4804 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.919610 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.920832 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.921514 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.922088 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.922617 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.923216 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.923750 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.924425 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.925014 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.925569 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.926244 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.926700 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.927237 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.927764 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.931280 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.931956 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.932772 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.933387 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.934146 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.935330 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.937155 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.938250 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.938689 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.939675 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.940131 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.940706 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.942024 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.942488 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.945181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.945677 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.946947 4804 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.947112 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.949292 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.950318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.950735 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.951999 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.952824 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.953503 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.954363 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.955714 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.956803 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.958996 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.960301 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.961303 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.962507 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.963037 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.966531 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.968343 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.968915 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.970317 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.970805 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.971674 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.972430 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.972982 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.974021 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.974493 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.984248 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.991791 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 11:17:23 +0000 UTC, rotation deadline is 2026-11-17 08:42:04.113839654 +0000 UTC Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.991857 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7029h19m39.121985535s for next certificate rotation Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.993060 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.005239 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.021935 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a4037378ad3270a2bd06d3a9b2181ac846d876077a93ef221ea8ab024636f39"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023131 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b827994
88ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023474 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r6hvc" event={"ID":"8e616d20-36f4-4d59-9ce0-d2e18fd63902","Type":"ContainerStarted","Data":"e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r6hvc" event={"ID":"8e616d20-36f4-4d59-9ce0-d2e18fd63902","Type":"ContainerStarted","Data":"43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024948 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024982 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024999 4804 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d1a209bdf59e2de8ff9d3cc4ffea06668f6befa1dbc4ff3d94c857fc02a9676b"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.026329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.026377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c8343671f367a0ebb3e7e9c7ae9174859228b2c5f42b039e39249b2919ced418"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.036652 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.051441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.064102 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.086181 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.100570 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.120307 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.156576 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.186618 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.218945 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.248813 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.286992 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.301796 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.324967 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.596698 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.596814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.596976 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.597048 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.597027091 +0000 UTC m=+23.391907115 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.597170 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.597157925 +0000 UTC m=+23.392037909 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.629582 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-slkk8"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630251 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rm9ff"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630406 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630935 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lqqmt"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631081 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631245 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631414 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.632345 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.633962 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634142 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634349 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634481 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634642 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635022 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.635343 4804 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.635454 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635591 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635757 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635998 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636140 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 
11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636164 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636213 4804 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636229 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636406 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636499 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636551 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636704 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636602 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 
11:22:25.636862 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.637060 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637295 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637392 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637418 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637567 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637846 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637985 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.650098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.667778 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.681378 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.692970 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697320 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697397 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697426 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697450 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697478 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697503 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697525 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:25 crc kubenswrapper[4804]: 
I0128 11:22:25.697684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697709 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697723 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697745 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697747 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697758 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697803 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697830 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697847 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.697829963 +0000 UTC m=+23.492710007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697864 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697911 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697935 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697994 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698020 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.698050 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.698086 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.698077311 +0000 UTC m=+23.492957365 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698109 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698134 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5cs\" (UniqueName: 
\"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698320 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698482 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698505 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698532 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698572 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698655 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698704 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698726 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698853 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdbf\" (UniqueName: \"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " 
pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698946 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698966 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699013 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699041 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699058 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.699019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699120 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.699099172 +0000 UTC m=+23.493979186 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.699156 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.722239 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.739703 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.740449 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.753518 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.764459 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.776691 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.787956 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.794483 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799837 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799876 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799918 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc 
kubenswrapper[4804]: I0128 11:22:25.799943 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799968 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799988 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800033 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800053 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800073 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800093 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800120 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800140 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800160 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800126 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800215 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800211 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800261 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800300 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800295 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdbf\" (UniqueName: \"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: 
\"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800385 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800479 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800556 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800877 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800900 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 
11:22:25.800938 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800966 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800963 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801006 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801031 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801036 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801080 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod 
\"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801116 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801171 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801192 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801206 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801352 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801375 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801486 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801529 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801603 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801682 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801685 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801794 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801821 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802262 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802312 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802334 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802390 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802438 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802558 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802742 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802972 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.803293 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.804140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.816163 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.820492 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 
11:22:25.820853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdbf\" (UniqueName: \"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.822906 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.835407 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.847213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.858715 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.870648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.877070 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.878097 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:27:47.85024247 +0000 UTC Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.885817 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.904604 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.914930 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915093 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.915153 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915204 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.915252 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915314 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.918119 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.944658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.947876 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.958136 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.960061 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.966929 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd901be89_84b0_4249_9548_2e626a112a4c.slice/crio-d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f WatchSource:0}: Error finding container d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f: Status 404 returned error can't find the container with id d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.968800 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12825f11_ad6e_4db0_87b3_a619c0521c56.slice/crio-54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a WatchSource:0}: Error finding container 54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a: Status 404 returned error can't find the container with id 54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.973178 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.985807 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.012745 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.030320 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a"} Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.032076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f"} Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.043048 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.079092 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.104919 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.124076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.158855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.161185 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.182328 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.222844 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.253438 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.303175 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.311305 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.447136 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.565946 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.677135 4804 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.800931 4804 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801054 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config podName:735b7edc-6f8b-4f5f-a9ca-11964dd78266 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301030214 +0000 UTC m=+23.095910198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config") pod "multus-lqqmt" (UID: "735b7edc-6f8b-4f5f-a9ca-11964dd78266") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801316 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801365 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301354695 +0000 UTC m=+23.096234679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801397 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801427 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301419317 +0000 UTC m=+23.096299301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801428 4804 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801541 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.30151976 +0000 UTC m=+23.096399734 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801605 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801642 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301634524 +0000 UTC m=+23.096514628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817540 4804 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817599 4804 projected.go:194] Error preparing data for projected volume kube-api-access-55hnp for pod openshift-ovn-kubernetes/ovnkube-node-24gvs: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817667 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.317648957 +0000 UTC m=+23.112528941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55hnp" (UniqueName: "kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.878469 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:26:59.409588934 +0000 UTC Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.914690 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.965814 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.966082 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.036020 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.037760 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb" exitCode=0 Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.037822 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.040527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.040666 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.052274 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.069824 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.077136 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.082876 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.097814 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.113596 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.136895 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.145499 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.161707 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"s
etup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.173390 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.185793 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.198710 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.199630 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.208832 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc 
kubenswrapper[4804]: I0128 11:22:27.221844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.236613 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.250070 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.263007 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.277400 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.291853 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.304300 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318049 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318101 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: 
\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318163 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318178 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318793 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.319412 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.319452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.319475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.323809 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.323910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc 
kubenswrapper[4804]: I0128 11:22:27.324973 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.336980 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.348513 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.360198 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.392016 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.429650 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.467116 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.472464 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.473628 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: W0128 11:22:27.483156 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735b7edc_6f8b_4f5f_a9ca_11964dd78266.slice/crio-2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e WatchSource:0}: Error finding container 2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e: Status 404 returned error can't find the container with id 2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e Jan 28 11:22:27 crc kubenswrapper[4804]: W0128 11:22:27.487867 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod686039c6_ae16_45ac_bb9f_4c39d57d6c80.slice/crio-008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df WatchSource:0}: Error finding container 008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df: Status 404 returned error can't find the container with id 008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.516599 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc 
kubenswrapper[4804]: I0128 11:22:27.554829 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.589256 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.621712 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.621844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 
11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.621919 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.621893023 +0000 UTC m=+27.416773007 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.621999 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.622084 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.622067117 +0000 UTC m=+27.416947101 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722584 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722634 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722669 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722812 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722807 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 
28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722850 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722951 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.722933851 +0000 UTC m=+27.517813835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722859 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722993 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723040 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.723027684 +0000 UTC m=+27.517907668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722828 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723062 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723080 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.723074555 +0000 UTC m=+27.517954539 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.878986 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:47:19.96014038 +0000 UTC Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.914977 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.915002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.914994 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915104 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915255 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915355 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045454 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" exitCode=0 Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045847 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.047664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.049196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.049243 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.063846 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.076196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.088079 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.098494 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.111480 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{
\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.133624 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.178196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.201983 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.214562 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.226993 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.237746 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.250063 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.260795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.273476 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.290126 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.303218 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.315389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.334620 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.348578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.392319 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.430375 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.470372 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.513566 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.549261 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.589499 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.629492 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.671745 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.715600 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.879803 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:28:12.740816074 +0000 UTC Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.054582 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489" exitCode=0 Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.054647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059442 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059499 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.071074 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.080366 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.102371 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.114790 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.126106 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.139786 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.152547 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.164309 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.179998 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.200196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.212502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.223635 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.244522 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.271627 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.880398 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:09:42.356999931 +0000 UTC Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.913959 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.914048 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914105 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914171 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.914058 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914381 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.066369 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326" exitCode=0 Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.066421 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.088791 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28
T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.113140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.133988 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.148849 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.164651 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.177616 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.190089 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.204013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.222096 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.236719 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.248377 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.271152 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.285829 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.298460 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.365727 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367941 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.368147 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 
11:22:30.374100 4804 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.374340 4804 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375327 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375615 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.388688 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392269 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392296 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392304 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392317 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392326 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.407567 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411479 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.421791 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424747 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.436279 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439634 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.449676 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.449793 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451010 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
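Every retry of the status patch fails identically until the kubelet exhausts its retry budget in the "Unable to update node status" line above. The root cause sits at the tail of the err string: the serving certificate of the node.network-node-identity.openshift.io webhook expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28T11:22:30Z, so the TLS handshake behind every Post to https://127.0.0.1:9743/node fails x509 validation before the patch is ever evaluated. A minimal diagnostic sketch (not part of the log) that prints the validity window of whatever certificate that endpoint presents, assuming it is still listening on 127.0.0.1:9743 as in the err string:

```go
// certcheck.go: dial the webhook endpoint named in the kubelet error and
// report the presented certificate's validity window. Verification is
// skipped on purpose: verification is exactly what fails in the log, and
// we want the handshake to complete so the expired leaf can be inspected.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only; never for real traffic
	})
	if err != nil {
		log.Fatalf("TLS handshake failed: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
```

Run against this node, it should report notAfter=2025-08-24T17:21:41Z and expired=true for the leaf, matching the x509 message the kubelet logs on every attempt.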
event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451058 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451069 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553499 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553688 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656197 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656237 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759412 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759421 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759455 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862216 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.880832 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:46:11.17471972 +0000 UTC Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964712 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067373 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067460 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.070898 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0" exitCode=0 Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.070979 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.074437 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.090237 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.103766 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.117853 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.132472 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.147388 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.164426 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169665 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.178016 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.199576 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.211759 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.223436 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.234180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.245779 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.263812 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272097 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.277355 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374704 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477574 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477603 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580097 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580109 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.581672 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-v88kz"] Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.582031 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585046 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585316 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585363 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.587124 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.601919 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.617899 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.649784 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664693 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664788 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664813 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664851 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664916 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 
11:22:31.665008 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.664995426 +0000 UTC m=+35.459875410 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.665053 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.665083 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.665077789 +0000 UTC m=+35.459957773 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.668162 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682932 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682961 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.685347 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.702128 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.717752 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.734739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.747597 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.764648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765420 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765446 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765476 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765549 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765599 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765630 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765643 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765666 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765708 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765675144 +0000 UTC m=+35.560555128 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765730 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765722976 +0000 UTC m=+35.560602950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765667 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765748 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765757 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765790 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765784428 +0000 UTC m=+35.560664412 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.766538 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.778146 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785321 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785752 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.789957 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.797054 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.810126 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.821861 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.838266 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b335
5a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.881109 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:29:01.117930055 +0000 UTC Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889137 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889192 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: 
I0128 11:22:31.889200 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.897319 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914295 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914332 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914370 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914426 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914518 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914592 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:31 crc kubenswrapper[4804]: W0128 11:22:31.991321 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d27942_1d0e_4433_a349_e1a404557705.slice/crio-37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740 WatchSource:0}: Error finding container 37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740: Status 404 returned error can't find the container with id 37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740 Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991672 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.080726 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8" exitCode=0 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.080791 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.081824 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v88kz" event={"ID":"28d27942-1d0e-4433-a349-e1a404557705","Type":"ContainerStarted","Data":"37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094250 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094285 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.101002 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.112464 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.124360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.141213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.152489 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.166330 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.178682 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.190200 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196365 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.202247 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.214356 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.225320 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.236871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.252038 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f52604551
7b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.272682 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.282360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299800 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.335688 4804 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402075 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504703 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504713 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607189 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.661573 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710083 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710096 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813607 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.881668 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:35:28.47782565 +0000 UTC Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.916607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917089 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020224 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020331 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.088494 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v88kz" event={"ID":"28d27942-1d0e-4433-a349-e1a404557705","Type":"ContainerStarted","Data":"fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.093266 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108" exitCode=0 Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.093326 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.104072 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.118396 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\
"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122235 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.128013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.151389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.165433 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.178320 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d74
62\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.197205 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.209862 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.223192 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.224952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.224987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225052 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225066 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.237536 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bd
bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.257760 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.269140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.282023 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fa
c117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.295452 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.306932 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.322519 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327525 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327552 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.335458 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.345588 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.355611 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.369635 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.385917 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.397773 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.415742 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429174 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429937 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429968 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.441574 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.456209 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.467497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.479359 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.491318 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.504977 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532534 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532580 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532611 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635187 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737503 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840249 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840274 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840284 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.882124 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:28:33.225221009 +0000 UTC Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914597 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914672 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914745 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914855 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914989 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045984 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.101812 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.102230 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.108291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.123272 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.132278 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.144984 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647
be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149846 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.157222 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.168401 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.178010 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.191800 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.203461 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.214066 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.223813 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.240836 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252177 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252194 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252205 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.254135 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.266525 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.277844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.289329 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.301844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.314135 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.315713 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.325497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.335064 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.344861 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354813 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354828 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354840 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.358033 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87
a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.376068 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb
7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.386306 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.403988 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.417001 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.432363 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.444275 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.453621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456904 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.457002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.457015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.465079 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.478536 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.491606 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.503628 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.513873 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.526197 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.551418 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb
7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559894 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559907 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559933 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.561581 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.572095 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.583354 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.598035 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.610215 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.624223 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.658661 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662709 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662780 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.687685 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846
bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.706396 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.732191 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765341 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.771324 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.867945 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.867997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868005 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868028 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.883126 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:18:54.945636915 +0000 UTC Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.931338 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{
\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.944754 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.964462 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-rout
er-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970075 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970139 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970149 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.996483 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.009864 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.025617 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.050607 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072167 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.090824 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.111201 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.111551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.130215 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.169231 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.173853 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174636 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174648 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.210398 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.263312 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.263394 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278273 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278433 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.314712 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.356985 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381371 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381426 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.394711 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.441111 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.474868 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484134 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.511810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.555134 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586142 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586228 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586241 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.589663 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.631220 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.670934 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688384 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.709340 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.751206 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791220 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791242 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791255 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791413 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.828260 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.874858 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.883601 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:32:28.675666613 +0000 UTC Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895271 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895323 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.914742 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.914955 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915200 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915426 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915446 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.915507 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.915634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.952463 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.990977 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997752 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.100984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101119 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.116166 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.123642 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" exitCode=1 Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.123690 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.124512 4804 scope.go:117] "RemoveContainer" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.143556 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.168848 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0096
1024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.182097 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.202845 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203535 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203572 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203586 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.217310 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.229784 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.277273 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306412 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306465 4804 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306516 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.311681 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.351830 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.401270 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409368 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.440593 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.471739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511522 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.513418 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.550128 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.594931 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613680 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.716950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.716996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717041 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819244 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819340 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819362 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.884402 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:31:46.453579579 +0000 UTC Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926735 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926793 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.927015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030301 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030401 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.131060 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132851 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.134434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.134542 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.150847 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\
",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.169456 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\
"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.192133 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e7
8ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.206342 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.220770 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235349 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235378 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.237229 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.253614 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.268746 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.280337 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.292461 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.313578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.328723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338264 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338313 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338325 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338346 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.341893 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029
\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.354012 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.366502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.440985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441055 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441068 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.480906 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj"] Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.481344 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.483640 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.484080 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.497433 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.516648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.531610 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543413 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543442 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.544621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.556553 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.570451 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.583092 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.604790 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.620212 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.635015 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.648976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.649740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.649870 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.650009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.650098 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653371 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653493 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653583 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.673117 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.712941 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753653 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754046 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754319 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754342 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754391 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.755192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.755474 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.763250 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.803203 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.810863 4804 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.852695 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc 
kubenswrapper[4804]: I0128 11:22:37.857799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857846 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.884992 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:55:52.752987047 +0000 UTC Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.910951 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915185 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915286 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915402 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915600 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915759 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915931 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961626 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961702 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065186 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065239 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065284 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065304 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.101339 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:38 crc kubenswrapper[4804]: W0128 11:22:38.119049 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e3d03d_371f_46d2_946a_6156c9570604.slice/crio-7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21 WatchSource:0}: Error finding container 7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21: Status 404 returned error can't find the container with id 7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21 Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.141249 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.142031 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145464 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" exitCode=1 Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145597 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145733 4804 scope.go:117] "RemoveContainer" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.146808 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:38 crc kubenswrapper[4804]: E0128 11:22:38.147139 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.152396 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167108 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167782 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167930 4804 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168135 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168212 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.185514 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.212729 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.226948 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.242325 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.255651 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270685 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T1
1:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270726 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.285705 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.301683 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.319839 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.331391 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374769 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374817 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.376723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.411210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.449897 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.477917 4804 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.477984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.477999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.478023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.478035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.499697 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.536587 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e7
8ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581094 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581131 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683836 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683914 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683933 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683945 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787868 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787985 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.885578 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:51:30.788851231 +0000 UTC Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891278 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891298 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891308 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.955764 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"] Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.956660 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:38 crc kubenswrapper[4804]: E0128 11:22:38.956785 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.978489 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994217 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994268 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.002566 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.022015 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.044196 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.068281 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for 
removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.071765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.071842 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.081428 4804 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097207 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097291 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.114391 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.136578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.158518 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.158914 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.159010 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.161454 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.166445 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.166738 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.173373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.173412 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.173533 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 
11:22:39.173591 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.673572631 +0000 UTC m=+35.468452625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.179582 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.195578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200859 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200976 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.202108 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.213392 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.236290 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.259074 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.279369 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.297575 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304308 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304343 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.314803 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.341549 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.358871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.376136 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.397214 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407637 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407675 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.432089 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.470675 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.510939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.510992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511039 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.517180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.553268 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.594694 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613708 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613780 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.635345 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.677092 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.680775 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.680965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681078 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681085 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.681036828 +0000 UTC m=+51.475916852 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681140 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:40.681121 +0000 UTC m=+36.476001004 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.681248 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681445 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681603 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.681572875 +0000 UTC m=+51.476452869 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717401 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717459 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717484 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.718039 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.753145 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782258 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782309 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782350 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782452 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782502 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.78248983 +0000 UTC m=+51.577369814 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782582 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782615 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782635 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782692 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.782673796 +0000 UTC m=+51.577553810 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782761 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782841 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782873 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.783061 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.783017697 +0000 UTC m=+51.577897801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.793650 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.835344 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.876869 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.886124 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:18:42.818861418 +0000 UTC Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.913990 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914138 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914167 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914249 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914289 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914440 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914553 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923476 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923525 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923546 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027274 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.132013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235376 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339506 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339570 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547112 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547237 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547318 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606201 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.625518 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630228 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.648912 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653743 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653832 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.668279 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672491 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.690505 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.694018 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " 
pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.694261 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.694378 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:42.694351591 +0000 UTC m=+38.489231665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696281 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696310 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.716808 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.716969 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719365 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719439 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719460 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719491 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719510 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823437 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823451 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.886560 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:05:22.704377053 +0000 UTC Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.914480 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.914646 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927872 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927919 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030401 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030446 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133186 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133202 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133214 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338784 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442656 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545217 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648103 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750399 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750431 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854214 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854288 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854314 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.886862 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:51:16.162559159 +0000 UTC Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.914716 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.914769 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.914998 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.915009 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.915285 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.915461 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956462 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956524 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956536 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956555 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956569 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.058938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059133 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.162012 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265784 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265842 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.369009 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.470973 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471061 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573582 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573628 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573679 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676503 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.714558 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.714727 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.714816 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:46.714789553 +0000 UTC m=+42.509669577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778603 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778619 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778631 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881478 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881568 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.887660 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:04:11.617040892 +0000 UTC Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.914120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.914337 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
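The MountVolume.SetUp failure for metrics-certs above is not retried immediately: the kubelet schedules the next attempt about four seconds out ("durationBeforeRetry 4s") and lengthens the delay on each consecutive failure. A minimal sketch of that doubling backoff pattern; the initial delay and cap below are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the previous delay up to a cap, the pattern behind
// "No retries permitted until ... (durationBeforeRetry 4s)" above.
// The initial and maximum values are illustrative assumptions.
func nextRetryDelay(prev time.Duration) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2 * time.Minute
	)
	if prev <= 0 {
		return initial
	}
	next := prev * 2
	if next > maxDelay {
		next = maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 10; i++ {
		d = nextRetryDelay(d)
		fmt.Printf("attempt %2d: wait %v\n", i+1, d)
	}
}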
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984534 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984567 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087664 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190713 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190743 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190755 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294625 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294670 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397596 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500904 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500998 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.707016 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809776 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809788 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.888854 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:34:02.084862268 +0000 UTC Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912281 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912291 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914458 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914498 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914495 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914586 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914694 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914791 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015356 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015444 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118475 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118557 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118570 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323841 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323857 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427332 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427364 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529413 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529427 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529436 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631998 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.632015 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.632027 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734700 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734711 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836395 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836418 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.889090 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:19:59.712735107 +0000 UTC Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.914472 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:44 crc kubenswrapper[4804]: E0128 11:22:44.914592 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.928795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939285 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939320 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.941019 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.952929 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.963997 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.975182 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.988234 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.005738 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.018178 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.033922 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042124 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.051598 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87
a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.072845 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e7
8ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.084056 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.095785 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.112537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.126987 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.141348 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.144985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.155935 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 
11:22:45.247853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247863 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247892 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350598 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350626 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350637 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453101 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453114 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453145 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555390 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658719 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.761940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762057 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762067 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864675 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864757 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.890237 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:51:22.77431804 +0000 UTC Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.914821 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.915061 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.914915 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915197 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915361 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915529 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967809 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967989 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.968013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071859 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175100 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175130 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278519 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380813 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483602 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483658 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483678 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483690 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586598 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586701 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689538 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689692 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.754419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.754656 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.754760 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:54.754731568 +0000 UTC m=+50.549611592 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792858 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792930 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792974 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.890404 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:50:10.717008412 +0000 UTC Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.895965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896054 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.914427 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.914607 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998801 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101838 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101900 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204548 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204591 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204616 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204626 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307497 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410200 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410209 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512941 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512982 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616414 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719359 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719372 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719403 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822267 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.890656 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:09:14.89369712 +0000 UTC Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914007 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914054 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914112 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914162 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914360 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914454 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925071 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925094 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027952 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131348 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239169 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.342751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343270 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343328 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343347 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445594 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445640 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445683 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548353 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651111 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651190 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753461 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856912 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856977 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.857009 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.891652 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:33:06.378644846 +0000 UTC Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.914019 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:48 crc kubenswrapper[4804]: E0128 11:22:48.914164 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960552 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960563 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960589 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.063986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064071 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167000 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167097 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167124 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167238 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.269947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.269993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270036 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373515 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373577 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373620 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476173 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476189 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476220 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579427 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579464 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785355 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785456 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888106 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888211 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.892690 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:27:52.130100626 +0000 UTC Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914308 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914353 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914308 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.914487 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.914601 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.915006 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.915357 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990895 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990935 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990982 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094042 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094121 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094133 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203526 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.208215 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.212283 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.212502 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.231076 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\
\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.244663 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.259664 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.273441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.287865 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306688 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306700 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306732 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.308609 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87
a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.331739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7
c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.345427 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.370436 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268
c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.388604 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.404138 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409148 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409159 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409194 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.422248 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.438871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.451273 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 
11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.463970 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.475546 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.492006 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512559 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512591 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.614965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615047 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615066 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615078 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717528 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821471 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821555 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821601 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.893059 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:13:29.916159951 +0000 UTC Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.914750 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:50 crc kubenswrapper[4804]: E0128 11:22:50.914980 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925762 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925782 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925796 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028784 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028871 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099594 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.118254 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122556 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122631 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.136323 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142143 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142175 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.159049 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163153 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163184 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163207 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163218 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.182814 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188856 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188870 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188921 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.200581 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.200714 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202308 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202400 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.217754 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.218572 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221915 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221985 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221907 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" exitCode=1 Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.222958 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.223159 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.241335 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.256116 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.267235 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.279402 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 
11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.298684 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304391 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304425 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304484 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304514 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.311583 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.327901 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.340964 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.354646 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.368299 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.379344 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.392174 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc 
kubenswrapper[4804]: I0128 11:22:51.407415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407439 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.409069 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7
c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace 
event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.418553 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.431723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.443048 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.453621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509669 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.613068 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.716689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.716995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717082 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717094 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820712 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.893808 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:44:34.113268741 +0000 UTC Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.914604 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.917004 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917349 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.917451 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917635 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917865 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.923271 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.923676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925226 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925340 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028131 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.130997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131087 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.229181 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232935 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232959 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336858 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439397 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439407 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439441 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.542003 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.542015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644745 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644794 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747453 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747519 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850223 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850276 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850286 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.894075 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:08:57.046788882 +0000 UTC Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.915135 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:52 crc kubenswrapper[4804]: E0128 11:22:52.915265 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.924401 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.925246 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:22:52 crc kubenswrapper[4804]: E0128 11:22:52.925471 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.939235 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952840 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952864 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952872 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.953441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.967383 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.979558 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.991967 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 
11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.021631 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.035019 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.047432 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.054965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055066 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.061013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.071804 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.086646 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.097012 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.109508 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.126810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5a
c9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.142350 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.153544 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157355 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.165292 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362997 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.466014 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568517 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670925 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670940 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773866 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773892 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773909 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773921 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876536 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876608 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.894849 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:12:57.964194544 +0000 UTC Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914464 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914492 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914516 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914575 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914722 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914818 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.978996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979074 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979098 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081552 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081564 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081592 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184127 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.286990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287066 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389854 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.390042 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.492654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493121 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493220 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493309 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596381 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596394 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596411 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596424 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698851 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698898 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698929 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801251 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.840001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.840195 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.840277 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:10.840248435 +0000 UTC m=+66.635128419 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.895816 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:31:36.562016775 +0000 UTC Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.916731 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.916829 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.935355 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.954278 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.976797 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.001788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006688 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006719 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.021757 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87
a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.050862 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7
c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.061574 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.075268 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.095688 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108142 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108152 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.117141 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.131484 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.154210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.171390 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.188816 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.203789 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213272 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213287 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213298 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.220421 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.234228 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316322 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419134 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419173 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522229 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625088 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625131 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727977 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.750663 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.750777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.750864 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.750941 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.750923089 +0000 UTC m=+83.545803083 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.751120 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.751112075 +0000 UTC m=+83.545992059 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.830652 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.830934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831041 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831192 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851724 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851737 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851851 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851869 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851946 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.851927487 +0000 UTC m=+83.646807531 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851806 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851820 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852047 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852071 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852144 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.852125473 +0000 UTC m=+83.647005457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852317 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852439 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.852424443 +0000 UTC m=+83.647304417 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.896974 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:42:35.011588131 +0000 UTC Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914500 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914506 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914636 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.914740 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.914829 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.915068 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934212 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.036953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.036990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037001 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037071 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.139804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140335 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140446 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242796 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242846 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242918 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345285 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345296 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345318 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447713 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549661 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.550010 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651998 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.754779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755036 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755265 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857470 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857492 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857502 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.897871 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:34:59.990641386 +0000 UTC Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.914554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:56 crc kubenswrapper[4804]: E0128 11:22:56.914689 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960216 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960485 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960568 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960783 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.063972 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064036 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064063 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167079 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167121 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269686 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373287 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476630 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476666 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476679 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579633 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579816 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684453 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788735 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788744 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892531 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892719 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.893053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.898566 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:27:25.514112577 +0000 UTC Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.914902 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.915058 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915368 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.915032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915520 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997646 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997657 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997690 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100489 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100530 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.203942 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204033 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204062 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204084 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307953 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411838 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411877 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515267 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515408 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619070 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619185 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722469 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722485 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.899005 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:16:09.917484049 +0000 UTC Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.914712 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:58 crc kubenswrapper[4804]: E0128 11:22:58.915145 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929811 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929821 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929851 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033094 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033108 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.136950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.136997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137063 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239917 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343298 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343373 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343388 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343401 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447197 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447229 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447250 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551196 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653794 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653908 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756120 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756161 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.857958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858012 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.899726 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:06:16.371302831 +0000 UTC Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914065 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914065 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914279 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914382 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914482 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960890 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960978 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063513 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063567 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063619 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166635 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166702 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166765 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.229007 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.242788 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.245483 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.261860 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270709 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.273109 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.284040 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 
11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.311361 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.326241 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.344606 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.360272 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374183 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374223 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374837 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.393843 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.406213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc 
kubenswrapper[4804]: I0128 11:23:00.423516 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.453823 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.465601 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478439 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478452 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.479180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.492381 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.504855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581424 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581464 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684718 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684777 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788161 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788174 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.890726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891332 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.899847 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:12:03.837046259 +0000 UTC Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.914421 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:00 crc kubenswrapper[4804]: E0128 11:23:00.914584 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
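Every one of these patch failures records the same root cause in its message: the network-node-identity webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-01-28. A quick way to confirm from the node itself, assuming Python with the third-party cryptography package is available; the endpoint 127.0.0.1:9743 is taken from the errors above:

    # Fetch the webhook's serving certificate and compare notAfter to now.
    import datetime
    import socket
    import ssl

    from cryptography import x509  # third-party package; assumed installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint quoted in the errors above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # must be cleared before setting CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though verification fails

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notAfter:", cert.not_valid_after_utc)  # needs cryptography >= 42
    print("expired: ", cert.not_valid_after_utc < now)

The certificate_manager line just above also shows a kubelet-serving rotation deadline (2026-01-08) that is already in the past, consistent with a machine resumed long after its certificates were due for rotation.
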
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993978 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993988 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.994004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.994013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096842 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199291 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199335 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199373 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302200 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302292 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405290 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405301 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406190 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406216 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.421405 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.425911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.425992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
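The recurring NotReady reason, "no CNI configuration file in /etc/kubernetes/cni/net.d/", is plausibly the same outage seen from the other side: on an OVN-Kubernetes cluster that config file is written by ovnkube-node, whose ovnkube-controller container is crash-looping above. A quick check on the node, with the path taken from the kubelet message:

    # List candidate CNI config files; a valid file here is what lets the
    # runtime report NetworkReady=true and clears the NotReady condition.
    import glob
    import os

    NET_D = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

    # libcni accepts .conf, .conflist and .json files.
    confs = sorted(
        path
        for ext in ("*.conf", "*.conflist", "*.json")
        for path in glob.glob(os.path.join(NET_D, ext))
    )
    print(confs if confs else f"no CNI configuration file in {NET_D}")
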
event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426033 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426049 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.438038 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441693 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441781 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.453013 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456937 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.471833 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.478849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.478975 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479581 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.493987 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.494117 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
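Every retry above fails for the reason quoted at the end of each entry: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28. A minimal sketch for confirming this from the node itself (assumptions: run locally on the node, Python with the third-party cryptography package installed):

    # Sketch: inspect the webhook serving certificate the kubelet
    # cannot verify. Assumption: run on the node, where the webhook
    # listens on 127.0.0.1:9743 (endpoint taken from the error above).
    import datetime
    import ssl

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743

    # Fetch the cert without verifying it; a verifying handshake would
    # fail with the same x509 error the kubelet reports.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # not_valid_*_utc needs cryptography >= 42; older versions expose
    # naive not_valid_before / not_valid_after instead.
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:", now > cert.not_valid_after_utc)  # True per this log

If notAfter really is in the past, the webhook certificate needs to be rotated; if the node clock is wrong instead, fixing the time is the cure. Which of the two applies cannot be read off this log alone.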
event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508454 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508636 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.611005 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713641 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713706 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816893 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.817013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.900770 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:35:47.237237274 +0000 UTC Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914079 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914131 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914322 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914605 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914653 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918816 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021111 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021182 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021212 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021221 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.122959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.122999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123028 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123039 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.225008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.225017 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327225 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327278 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327288 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429652 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429701 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429710 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532233 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532278 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634660 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737287 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737301 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.901050 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:25:39.941333278 +0000 UTC Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.914484 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:02 crc kubenswrapper[4804]: E0128 11:23:02.914794 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941727 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941755 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044055 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044105 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044140 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149328 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149343 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149378 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252267 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252363 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354741 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456869 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559082 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559122 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661763 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764428 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866666 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.902032 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:56:45.897331875 +0000 UTC Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914530 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914563 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914557 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.914850 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.914993 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.915080 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969815 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.072599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.072933 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073147 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175820 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277616 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277642 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379702 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379711 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481989 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584280 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686623 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788638 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788680 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788693 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788718 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890917 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890954 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.902175 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:53:28.646373014 +0000 UTC Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.914082 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:04 crc kubenswrapper[4804]: E0128 11:23:04.914248 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.929564 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.945264 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.961794 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.975545 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.987893 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992678 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.011023 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.021555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.033144 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.044550 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.055794 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.070382 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.083555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.092863 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095100 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095139 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095180 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.104334 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.126833 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.138504 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.153249 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 
2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.164572 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197627 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197683 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197728 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300125 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300242 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300255 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403386 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505843 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608797 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608877 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608922 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608974 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.814992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815113 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.902655 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:42:58.281823731 +0000 UTC Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914083 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914124 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914207 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914279 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914432 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914564 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.916277 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.916711 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918458 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918529 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020593 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020608 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123554 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123595 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227213 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227232 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227338 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330511 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330523 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434138 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537796 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.640938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.640997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641012 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641042 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744388 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744468 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744494 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.847987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848083 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848100 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.902847 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:32:43.812493474 +0000 UTC Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.914306 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:06 crc kubenswrapper[4804]: E0128 11:23:06.914474 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951391 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951403 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.158402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159069 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159096 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267137 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267185 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371763 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475603 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475632 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578143 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680482 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680521 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680556 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783567 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783635 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886967 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.903420 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:42:58.436851821 +0000 UTC Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918527 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918558 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918581 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918704 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918820 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918929 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990140 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990240 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093441 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196215 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298272 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298283 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400725 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503451 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606286 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606346 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708814 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708843 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811703 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811769 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.903920 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:16:56.903169486 +0000 UTC Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.913975 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:08 crc kubenswrapper[4804]: E0128 11:23:08.914100 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914138 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914253 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016618 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016684 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016727 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119469 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119541 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119574 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.223826 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.223954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224057 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224120 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326797 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326837 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326873 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430776 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533854 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533899 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636815 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739760 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739770 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.842948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.842996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.904728 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:22:43.861605291 +0000 UTC Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914184 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914321 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914528 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914574 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914670 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914719 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946092 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048411 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048421 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048440 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048453 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151704 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151799 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255468 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255565 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358106 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358120 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358151 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461724 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461752 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564760 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564786 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671968 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.672024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.672058 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774774 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878263 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878276 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.905770 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:27:19.373393545 +0000 UTC Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.914477 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.914711 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.929707 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.929997 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.930140 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:42.930105995 +0000 UTC m=+98.724986009 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980855 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980866 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980919 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083047 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083135 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083149 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289708 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392864 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392978 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495971 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599264 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599299 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599323 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702203 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702216 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.727950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728067 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.742298 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747710 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.760682 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768566 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768669 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.785495 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790341 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790380 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.803459 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807593 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807684 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.820480 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.820640 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822509 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822544 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.906670 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:39:46.303024347 +0000 UTC Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914132 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914182 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914285 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914436 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914563 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914700 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925547 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925627 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925643 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028288 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028340 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130709 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130789 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233105 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335556 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438372 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541181 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644662 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644684 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644728 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748321 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748395 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850689 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.907009 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:31:13.517359468 +0000 UTC Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.914331 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:12 crc kubenswrapper[4804]: E0128 11:23:12.914493 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.952694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.952975 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953253 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.055780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056440 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.159002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.159014 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262062 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262099 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365397 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365861 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572573 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572624 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675251 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675303 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675331 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777909 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880143 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.907169 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:07:09.637335046 +0000 UTC Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914598 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914642 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914693 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.914799 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.914922 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.915011 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983379 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085394 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085445 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187540 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187557 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.289731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290235 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302661 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302779 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" exitCode=1 Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302856 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.303363 4804 scope.go:117] "RemoveContainer" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.315421 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.328181 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.339360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.354154 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.375043 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.386957 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395165 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395183 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395196 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.412063 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.428959 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.446347 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 
2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.460269 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.476785 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.490145 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.505446 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.523389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.544622 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.562102 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.579788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.597426 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600367 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600440 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600454 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706765 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.808853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809241 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809250 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.908237 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:54:31.926799745 +0000 UTC Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912745 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.915204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:14 crc kubenswrapper[4804]: E0128 11:23:14.915355 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.932266 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.946795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.961537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.977973 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.991774 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.005850 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016294 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.018502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.028918 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.041801 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.059707 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.072596 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.084838 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.107787 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118344 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118373 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.120680 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.133937 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.146745 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.163211 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.176722 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: 
I0128 11:23:15.220498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220510 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220528 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220542 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.307501 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.307562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323508 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323572 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323601 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.326170 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.337592 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.351255 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.364222 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.376066 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.390957 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.403337 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.414413 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426587 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 
11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426857 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426866 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.447047 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.457851 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.470098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 
2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.482216 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.494193 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.505114 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.516622 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.528415 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529190 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529222 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.537613 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631503 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631568 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631580 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734380 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734429 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734443 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734463 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734475 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836742 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836783 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
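The "Node became not ready" condition repeated above is driven by a single predicate: the container runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/. A small sketch that mirrors that check, assuming libcni's usual extension set (.conf, .conflist, .json); the directory is the one named in the log message, and the exact runtime code path may differ:

// cni_conf_check.go - lists the CNI conf directory from the log and reports
// whether any libcni-style config file is present; an empty result is the
// state the kubelet keeps reporting above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}
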
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.909210 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:15:57.251324211 +0000 UTC Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914306 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914399 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914463 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914535 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939226 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939271 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041390 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041489 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143543 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143623 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246452 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350150 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452972 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.453006 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556396 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556859 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661894 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661945 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661992 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764398 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764467 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764479 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866655 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866716 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.910347 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:52:23.951560439 +0000 UTC Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.914918 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:16 crc kubenswrapper[4804]: E0128 11:23:16.915175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969284 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969305 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071655 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
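The certificate_manager.go lines are a separate thread from the network failure: the kubelet-serving certificate is valid until 2026-02-24, but the rotation deadline the manager computes already lies in the past, so rotation is overdue and the deadline is re-drawn with fresh jitter on each retry, which is why it differs from line to line. A sketch of that schedule, assuming client-go's usual jitter window of roughly 70-90% of the certificate lifetime and an assumed one-year lifetime (both assumptions, not read from this log):

// rotation_deadline.go - illustrates why each certificate_manager.go line
// above reports a different, already-past rotation deadline.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Jitter fraction assumed from client-go's certificate manager (~0.7-0.9).
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed 1y lifetime
	deadline := rotationDeadline(notBefore, notAfter)
	now := time.Date(2026, 1, 28, 11, 23, 16, 0, time.UTC) // node clock in the log
	fmt.Printf("rotation deadline %s; overdue=%v\n", deadline, now.After(deadline))
}
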
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173456 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276348 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379263 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379329 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481831 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481925 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481936 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.584977 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686946 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789835 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789943 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892467 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.910917 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:19:50.103932987 +0000 UTC Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914299 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914324 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914312 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914477 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914676 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914731 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.995022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.995041 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097841 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097877 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200588 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200614 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.303957 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304251 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304285 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407075 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407140 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407158 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407170 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510211 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510235 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510245 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612342 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612469 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.714959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817314 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817367 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.912015 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:49:27.772322745 +0000 UTC Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.914243 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:18 crc kubenswrapper[4804]: E0128 11:23:18.914569 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920327 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.022954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.022995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023040 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126969 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126996 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229425 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229476 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229508 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229520 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332125 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434761 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537420 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537460 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537501 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639484 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639533 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639550 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639561 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742477 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742535 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844679 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.913008 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:28:57.52845236 +0000 UTC Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914488 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.914697 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914800 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914918 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.915108 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.915318 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.916571 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947842 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947859 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947924 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.051962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052073 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052090 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156101 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156159 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156193 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259600 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259612 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362213 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470800 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574720 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677815 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677878 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677931 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677985 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780392 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780460 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.883014 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.883025 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.913638 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:30:42.075548647 +0000 UTC Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.914988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:20 crc kubenswrapper[4804]: E0128 11:23:20.915120 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985150 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087874 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190614 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190685 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190696 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293289 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293349 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293367 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293379 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.329002 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.329578 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332409 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" exitCode=1 Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332480 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.333587 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.333850 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.348972 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.362575 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.374217 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.384741 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.393900 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395499 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395510 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395573 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395584 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.403481 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.415214 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.428157 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.441360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.453987 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.470837 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.479934 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.489727 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497787 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497799 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.509123 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.518990 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.529471 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 
2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.540547 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.552242 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600177 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600192 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600203 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.701990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702056 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804752 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804869 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908000 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.913896 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:31:38.777098005 +0000 UTC Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.913989 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.914000 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.914052 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914125 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914201 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008855 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008915 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008929 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.026730 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030823 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030897 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030911 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.043100 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047852 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047865 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.062770 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066368 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066398 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.079173 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.096194 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.096355 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098073 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
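Note the failure mode in the patch attempts above: the node status update is rejected before it can be stored, because the API server must first consult the node.network-node-identity.openshift.io validating webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-28; after a handful of retries the kubelet gives up ("update node status exceeds retry count"). The validity-window check that produces the quoted x509 error is the standard one in Go's TLS stack; a sketch of the same check, with a hypothetical certificate file path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; on the node the webhook's serving certificate
        // lives wherever network-node-identity mounts its TLS secret.
        pemBytes, err := os.ReadFile("webhook-cert.pem")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now()
        // The same window check that produced the logged error:
        // "current time ... is after 2025-08-24T17:21:41Z".
        if now.After(cert.NotAfter) {
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        } else if now.Before(cert.NotBefore) {
            fmt.Println("certificate is not yet valid")
        } else {
            fmt.Println("certificate is within its validity window")
        }
    }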
event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098134 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098154 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098166 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201564 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201579 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304214 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304249 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.338444 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406774 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509217 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509254 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612462 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714644 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714677 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.817960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.817998 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818040 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.914499 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:41:10.88088094 +0000 UTC Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.914660 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.914807 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920763 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920791 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.925564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.927697 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.927986 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.941805 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.961723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with 
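The multus termination message in the entry above explains the earlier restart: the daemon polls for the default network's readiness-indicator file (10-ovn-kubernetes.conf, written by OVN-Kubernetes) and exits with code 1 when the poll times out, hence the quoted "pollimmediate error: timed out waiting for the condition". A plain-Go sketch of the pattern; the 1s interval and 45s timeout are assumptions read off the timestamps (started 11:22:29, failed 11:23:14):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile reproduces the multus "readiness indicator" pattern: check
    // immediately, then poll until the file appears or the timeout elapses.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Path taken from the multus log above; interval/timeout assumed.
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        if err != nil {
            fmt.Println("pollimmediate error:", err)
            os.Exit(1) // matches the exitCode:1 recorded in the pod status
        }
        fmt.Println("default network ready")
    }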
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.977586 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
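All of these "Failed to update status for pod" entries carry the same shape of payload: a strategic merge patch in which "$setElementOrder/conditions" pins the order of the conditions list (merged by its "type" key) while only the changed elements are written out in full; each patch then fails because pod status writes, like node status writes, pass through the pod.network-node-identity.openshift.io webhook with the expired certificate. A sketch that builds such a patch body, using condition values taken from the multus-lqqmt entry above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Minimal strategic-merge patch body in the style of the kubelet's
        // status manager; only a subset of the logged fields is shown.
        patch := map[string]any{
            "status": map[string]any{
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "True",
                        "lastTransitionTime": "2026-01-28T11:23:15Z"},
                },
            },
        }
        out, _ := json.MarshalIndent(patch, "", "  ")
        fmt.Println(string(out))
        // The API server runs this patch through validating webhooks before
        // persisting it, which is why one expired webhook certificate blocks
        // every status update on the node.
    }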
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.998224 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.013339 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023478 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023532 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.033741 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87
a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.057620 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b671002
0577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.076163 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.087537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.109234 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Complete
d\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.119795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125788 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125837 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.131647 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.141799 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.153187 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.163182 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.175371 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.187645 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.199351 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228052 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228132 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.329908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330058 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432407 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432444 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.534956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.534997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535032 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.637965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638015 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638074 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741069 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741114 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844524 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844595 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844633 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914372 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.914561 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914629 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914707 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:15:16.198951024 +0000 UTC Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914686 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.914934 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.915029 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948369 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948380 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948421 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.051957 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052043 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.154976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257128 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360697 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360772 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464679 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464793 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568507 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568638 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671835 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671853 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775533 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775564 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879670 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.914850 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.915871 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:39:02.677131783 +0000 UTC Jan 28 11:23:24 crc kubenswrapper[4804]: E0128 11:23:24.916038 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.938423 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.950960 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.963768 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 
11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.981982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982076 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.987172 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.004141 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.023658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 
2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.043329 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.065314 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085399 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085440 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.100636 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.116436 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.127579 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.154271 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b671002
0577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.164176 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.177730 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188198 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188226 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.190497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.204455 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.219599 4804 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290636 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290686 4804 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290729 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393633 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393645 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495443 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495491 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495526 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597662 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597756 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700154 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700184 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700207 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.802924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.802992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803054 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905992 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914110 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914249 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914316 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914405 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914556 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914687 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.916849 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:49:18.671799004 +0000 UTC Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008334 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008448 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111067 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111074 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111100 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213935 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316539 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419233 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419313 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419343 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522202 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522211 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522225 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522234 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624303 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624315 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726817 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828585 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.914227 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:26 crc kubenswrapper[4804]: E0128 11:23:26.914376 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.917553 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:43:53.820232734 +0000 UTC Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931283 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931314 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033290 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135526 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135536 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238396 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238431 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341206 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.443940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.443999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444042 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444059 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546344 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648685 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648702 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751164 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.803014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.803117 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803199 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:31.80317443 +0000 UTC m=+147.598054424 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803204 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803256 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.803244092 +0000 UTC m=+147.598124086 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853851 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853868 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853937 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853956 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.903979 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.904040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.904082 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904176 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904181 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904191 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904211 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904222 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904230 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904237 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904238 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904222923 +0000 UTC m=+147.699102917 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904297 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904280035 +0000 UTC m=+147.699160039 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904330 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904318436 +0000 UTC m=+147.699198440 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914062 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914106 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914079 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914204 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914443 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914534 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.918153 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:55:17.651089784 +0000 UTC Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.927092 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958232 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060742 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060789 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163311 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266724 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266778 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370644 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.475958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476066 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579977 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683465 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683486 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786656 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786691 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889720 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.914539 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:28 crc kubenswrapper[4804]: E0128 11:23:28.914870 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.918250 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:55:50.393183484 +0000 UTC Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992955 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095577 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095589 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197368 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197391 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.299992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300058 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300069 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402397 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504544 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504554 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607106 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709295 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709309 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812129 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.913988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.914024 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.913988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914180 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914268 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914441 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915079 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915159 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.918549 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:37:43.572632266 +0000 UTC Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.017920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.017980 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018207 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121124 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121262 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121288 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224965 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327576 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429863 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531488 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531511 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633814 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633901 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633917 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633931 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735646 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735683 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838641 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838659 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.914188 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:30 crc kubenswrapper[4804]: E0128 11:23:30.914346 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.919109 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:40:35.217958687 +0000 UTC Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942182 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044615 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044638 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148104 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148199 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251543 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251620 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355158 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355178 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355225 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.460869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.462048 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565210 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667704 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667772 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667811 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.873986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874070 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874087 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.914582 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.914761 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.915043 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.915112 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.915212 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.915393 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.919462 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:23:54.513521448 +0000 UTC Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978088 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.979013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.082590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083279 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083495 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083941 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187678 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187770 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187822 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.291048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292562 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395247 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395262 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400837 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400889 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.459117 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj"] Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.461578 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464652 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464791 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464832 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.465182 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.516444 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.5164198 podStartE2EDuration="1m9.5164198s" podCreationTimestamp="2026-01-28 11:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.503270054 +0000 UTC m=+88.298150048" watchObservedRunningTime="2026-01-28 11:23:32.5164198 +0000 UTC m=+88.311299804" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.517202 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=32.517192494 podStartE2EDuration="32.517192494s" podCreationTimestamp="2026-01-28 11:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.516902665 +0000 UTC m=+88.311782649" watchObservedRunningTime="2026-01-28 11:23:32.517192494 +0000 UTC m=+88.312072498" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550081 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550114 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550729 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.598702 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-r6hvc" podStartSLOduration=68.598625361 podStartE2EDuration="1m8.598625361s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.579582933 +0000 UTC m=+88.374462927" watchObservedRunningTime="2026-01-28 11:23:32.598625361 +0000 UTC m=+88.393505355" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.616711 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" podStartSLOduration=66.616676259 podStartE2EDuration="1m6.616676259s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.599632592 +0000 UTC m=+88.394512596" watchObservedRunningTime="2026-01-28 11:23:32.616676259 +0000 UTC m=+88.411556243" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.631574 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.631547598 podStartE2EDuration="1m8.631547598s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.617750482 +0000 UTC m=+88.412630466" watchObservedRunningTime="2026-01-28 11:23:32.631547598 +0000 UTC m=+88.426427582" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652109 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652176 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652362 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652677 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.653332 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.673908 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.676683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.689098 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.689071476 podStartE2EDuration="5.689071476s" podCreationTimestamp="2026-01-28 11:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.665790456 +0000 UTC m=+88.460670430" watchObservedRunningTime="2026-01-28 11:23:32.689071476 +0000 UTC m=+88.483951460" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.689638 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lqqmt" podStartSLOduration=67.689633333 podStartE2EDuration="1m7.689633333s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.689071956 +0000 UTC m=+88.483951960" watchObservedRunningTime="2026-01-28 11:23:32.689633333 +0000 UTC m=+88.484513317" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.744232 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.74421632 podStartE2EDuration="1m8.74421632s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.725259084 +0000 UTC m=+88.520139058" watchObservedRunningTime="2026-01-28 11:23:32.74421632 +0000 UTC m=+88.539096304" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.758268 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podStartSLOduration=67.758244743 podStartE2EDuration="1m7.758244743s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.757787079 +0000 UTC m=+88.552667073" watchObservedRunningTime="2026-01-28 11:23:32.758244743 +0000 UTC m=+88.553124727" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.777417 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.780342 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" podStartSLOduration=67.780329366 podStartE2EDuration="1m7.780329366s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.780167491 +0000 UTC m=+88.575047475" watchObservedRunningTime="2026-01-28 11:23:32.780329366 +0000 UTC m=+88.575209350" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.825164 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-v88kz" podStartSLOduration=66.825140161 podStartE2EDuration="1m6.825140161s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.824250363 +0000 UTC m=+88.619130347" watchObservedRunningTime="2026-01-28 11:23:32.825140161 +0000 UTC m=+88.620020145" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.914213 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:32 crc kubenswrapper[4804]: E0128 11:23:32.914438 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.920159 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:05:17.292351006 +0000 UTC Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.920247 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.929937 4804 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.379324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" event={"ID":"4eced286-7c46-4520-99c1-b8b7225d9c72","Type":"ContainerStarted","Data":"9be897b264fbbca8adb89640387e9e927dc46a4f873e9762d31504daa3efc583"} Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.379373 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" event={"ID":"4eced286-7c46-4520-99c1-b8b7225d9c72","Type":"ContainerStarted","Data":"4692fd01b10e57950b4b3dc2f5edd63e8e7f02bf76ca3f18c1bdfe29a3880520"} Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914538 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914555 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.914858 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.914947 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.915070 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:34 crc kubenswrapper[4804]: I0128 11:23:34.914601 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:34 crc kubenswrapper[4804]: E0128 11:23:34.915593 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.913927 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.914245 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914235 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.914338 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914639 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914817 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:36 crc kubenswrapper[4804]: I0128 11:23:36.914366 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:36 crc kubenswrapper[4804]: E0128 11:23:36.914558 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:36 crc kubenswrapper[4804]: I0128 11:23:36.915325 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:36 crc kubenswrapper[4804]: E0128 11:23:36.915470 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.914760 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.914907 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.914992 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.915113 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.915264 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.915496 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:38 crc kubenswrapper[4804]: I0128 11:23:38.915000 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:38 crc kubenswrapper[4804]: E0128 11:23:38.915297 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.914863 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.914904 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.915047 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915096 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915086 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915543 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:40 crc kubenswrapper[4804]: I0128 11:23:40.914573 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:40 crc kubenswrapper[4804]: E0128 11:23:40.914866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914770 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.914960 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.915006 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914815 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.915066 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:42 crc kubenswrapper[4804]: I0128 11:23:42.914272 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.914528 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:42 crc kubenswrapper[4804]: I0128 11:23:42.962647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.962865 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.962991 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:46.962965572 +0000 UTC m=+162.757845596 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914286 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914324 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.914623 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914719 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.914862 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.915073 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:44 crc kubenswrapper[4804]: I0128 11:23:44.914141 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:44 crc kubenswrapper[4804]: E0128 11:23:44.916181 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914156 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914220 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914231 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914360 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:46 crc kubenswrapper[4804]: I0128 11:23:46.914678 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:46 crc kubenswrapper[4804]: E0128 11:23:46.914995 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915223 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915376 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915238 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915485 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915659 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915845 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:48 crc kubenswrapper[4804]: I0128 11:23:48.914281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:48 crc kubenswrapper[4804]: E0128 11:23:48.915071 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:48 crc kubenswrapper[4804]: I0128 11:23:48.916219 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:48 crc kubenswrapper[4804]: E0128 11:23:48.916482 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.914770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.914862 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.915050 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915187 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915315 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:50 crc kubenswrapper[4804]: I0128 11:23:50.914678 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:50 crc kubenswrapper[4804]: E0128 11:23:50.914977 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.915080 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.915275 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.915619 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.915708 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.916053 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.916223 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:52 crc kubenswrapper[4804]: I0128 11:23:52.914268 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:52 crc kubenswrapper[4804]: E0128 11:23:52.914700 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915035 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915084 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915177 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915273 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915337 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:54 crc kubenswrapper[4804]: I0128 11:23:54.914370 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:54 crc kubenswrapper[4804]: E0128 11:23:54.916468 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914808 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914927 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914832 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915031 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915169 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915385 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:56 crc kubenswrapper[4804]: I0128 11:23:56.914439 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:56 crc kubenswrapper[4804]: E0128 11:23:56.914722 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914646 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914748 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914855 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.914959 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.915345 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.915444 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:58 crc kubenswrapper[4804]: I0128 11:23:58.931122 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:58 crc kubenswrapper[4804]: E0128 11:23:58.931477 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.914677 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.914803 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.915007 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.915058 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.915256 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.915574 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.486775 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487329 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487378 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" exitCode=1 Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487465 4804 scope.go:117] "RemoveContainer" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.488395 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.488631 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.516768 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" podStartSLOduration=95.516749358 podStartE2EDuration="1m35.516749358s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:33.401699498 +0000 UTC m=+89.196579512" 
watchObservedRunningTime="2026-01-28 11:24:00.516749358 +0000 UTC m=+116.311629352" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.914970 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.915187 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.916422 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.916682 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.493434 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914572 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915098 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915793 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:02 crc kubenswrapper[4804]: I0128 11:24:02.914192 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:02 crc kubenswrapper[4804]: E0128 11:24:02.914397 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914923 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914967 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914951 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915140 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915365 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915486 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:04 crc kubenswrapper[4804]: I0128 11:24:04.914773 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:04 crc kubenswrapper[4804]: E0128 11:24:04.916325 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:04 crc kubenswrapper[4804]: E0128 11:24:04.939469 4804 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.003254 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914483 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914620 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914809 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914942 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:06 crc kubenswrapper[4804]: I0128 11:24:06.914578 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:06 crc kubenswrapper[4804]: E0128 11:24:06.914727 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914471 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914571 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914673 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914808 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914848 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:08 crc kubenswrapper[4804]: I0128 11:24:08.915043 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:08 crc kubenswrapper[4804]: E0128 11:24:08.915253 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914728 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914802 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914857 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915074 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915186 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915386 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:10 crc kubenswrapper[4804]: E0128 11:24:10.005100 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:24:10 crc kubenswrapper[4804]: I0128 11:24:10.914303 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:10 crc kubenswrapper[4804]: E0128 11:24:10.914587 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915087 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915119 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915096 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915606 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915614 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.538765 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.539492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"} Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.914957 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:12 crc kubenswrapper[4804]: E0128 11:24:12.915197 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914597 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914860 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.914844 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.915441 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.915340 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:14 crc kubenswrapper[4804]: I0128 11:24:14.915107 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:14 crc kubenswrapper[4804]: E0128 11:24:14.916935 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:14 crc kubenswrapper[4804]: I0128 11:24:14.917582 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.006065 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.554406 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.557168 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.558256 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.591707 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podStartSLOduration=110.591681649 podStartE2EDuration="1m50.591681649s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:15.591233945 +0000 UTC m=+131.386113929" watchObservedRunningTime="2026-01-28 11:24:15.591681649 +0000 UTC m=+131.386561643" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.818933 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"] Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.819054 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.819133 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.914796 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915002 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.914815 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915328 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.915824 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915971 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914368 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914411 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914411 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914475 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.914759 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915125 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915313 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915333 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.914861 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915061 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915115 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915242 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915285 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915282 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915499 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915602 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914829 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914865 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914961 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914837 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918871 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918965 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918960 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919836 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919932 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 11:24:22 crc kubenswrapper[4804]: I0128 11:24:22.949099 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.348944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.402474 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.402963 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.406419 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.407507 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.407631 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.408757 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.408985 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.409597 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.409727 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.413466 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414057 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.415937 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.416164 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.416554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417101 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417244 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417404 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417615 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417690 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417811 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.418738 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.420652 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.422745 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.423710 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.425140 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.426257 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.426935 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427026 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427471 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427656 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428214 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428295 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428588 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428633 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429024 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429339 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429592 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429677 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429830 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430201 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430234 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430393 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436164 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436382 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436622 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436814 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436961 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.437094 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.438974 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.439667 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440036 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440541 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440727 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 
11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440809 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440949 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.442111 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.442516 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.443779 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.444163 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.444365 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.447433 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.447762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.448014 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.448233 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.449938 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.450584 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.451388 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.452396 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454499 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454722 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454871 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.455181 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456405 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456519 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456652 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456849 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457346 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457448 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457788 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.467905 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.482981 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.492791 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493497 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493686 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493846 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.494013 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.494969 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495078 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495259 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495338 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495726 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.496122 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.496868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.497282 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.497287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.498740 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.498810 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499185 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499338 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499364 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499615 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499960 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500211 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.500409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500537 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500577 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500694 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500846 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503082 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503320 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503648 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503843 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503868 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503928 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503999 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504023 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504069 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504105 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504217 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.505643 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.508796 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.509412 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.509989 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513227 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513387 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513442 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513727 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513950 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.514057 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.514079 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513400 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.517306 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.520607 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521010 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521247 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521472 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521855 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527233 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527691 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h44hn"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527819 
4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528031 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528194 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528295 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528421 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528521 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528586 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529133 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529464 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529505 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529696 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.530171 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.530404 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.531700 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.533584 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.539815 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.539990 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.541968 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.553793 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.561014 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.561854 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.566022 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.580260 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-slcp9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.581967 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.582180 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.582595 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.583840 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.585838 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.586532 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.586642 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.588816 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.590654 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.591620 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592164 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592272 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592484 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.593025 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.593809 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.594834 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.595353 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.595834 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.597408 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.598605 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.600533 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.601768 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.602631 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.604129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606264 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606419 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606631 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606721 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606744 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc 
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606807 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606836 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606863 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607093 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607116 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607177 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607222 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607241 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607260 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607279 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607301 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607323 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607338 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607358 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607413 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607434 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607458 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607525 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607561 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607575 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607593 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607683 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607703 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607726 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607747 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607767 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607806 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607824 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607936 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607955 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608145 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608441 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608502 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608532 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608585 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608606 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608875 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48l2b\" (UniqueName: \"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609137 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609155 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609182 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609211 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609272 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609293 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-config\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609327 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609495 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610037 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610326 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610362 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610381 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " 
pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610431 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610462 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610482 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610501 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610542 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.610646 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610677 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610732 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rm6\" (UniqueName: \"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610814 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610869 
4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610979 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.611000 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612623 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612923 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.613343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.614960 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.615074 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618531 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618676 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618766 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.623147 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.623191 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.624549 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.627129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.629556 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.630120 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.630314 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.633016 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.635649 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.636774 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.638772 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.641650 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.644562 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.646105 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.646525 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.648719 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.650485 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.652781 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.654495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.656516 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.658414 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.661839 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.665843 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.670300 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-97kr8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.672828 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.672951 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.673044 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.674525 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.679439 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.681537 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.683222 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.685016 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.685332 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.687094 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.689056 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.690763 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.692692 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slcp9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.694458 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.696651 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.698274 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.699807 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.701745 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.701766 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.705539 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712068 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712123 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712165 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712186 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712207 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712226 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " 
pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712245 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712407 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712463 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712525 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712613 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712701 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712730 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rm6\" (UniqueName: \"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712798 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712827 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712858 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712917 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: 
\"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712946 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712983 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713046 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713074 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713113 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713173 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713200 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713261 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713347 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713382 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " 
pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713472 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713523 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713668 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713701 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713758 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713784 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713808 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713841 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713870 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713918 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714071 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714100 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714124 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714153 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714209 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714235 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714281 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714350 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714393 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714417 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714438 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714532 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714567 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714612 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714669 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714699 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714724 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48l2b\" (UniqueName: 
\"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-config\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714791 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714921 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715014 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715043 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: 
\"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715138 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715779 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714284 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715915 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.716427 4804 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714773 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717445 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717994 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718190 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718225 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718159 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718371 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718480 4804 
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718647 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718715 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719087 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719501 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719628 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720526 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720968 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.721371 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.721764 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722222 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722802 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722849 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723163 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723822 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724521 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724610 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724769 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724950 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.725541 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.725651 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.726418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.726585 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.727297 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.747957 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.765837 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.778019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.786355 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.795678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.805955 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.827629 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.841613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.846351 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.866131 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.881013 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.885931 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.905964 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.926005 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.944847 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.965431 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.985697 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.004356 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.025609 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.045988 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.066404 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.090249 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.105681 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.126129 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.144704 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.184685 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.186315 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.211604 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.224950 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.246300 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.265539 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.285588 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.290027 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.305714 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.326105 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 
11:24:24.345698 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.365276 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.385130 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.405585 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.417523 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.424890 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.445434 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.454823 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.467505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.486766 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.505677 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.526598 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.566097 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.573619 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.584031 4804 request.go:700] Waited for 
1.001560987s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.586241 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.606483 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.624620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.626954 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.655634 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.670581 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.689754 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.707128 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.712435 4804 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.712583 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images podName:a7c281fd-3e5a-4edc-98f7-8703c1f08aab nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.212549487 +0000 UTC m=+141.007429481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images") pod "machine-config-operator-74547568cd-6g5ff" (UID: "a7c281fd-3e5a-4edc-98f7-8703c1f08aab") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714570 4804 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714617 4804 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714640 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls podName:a7c281fd-3e5a-4edc-98f7-8703c1f08aab nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:25.214626745 +0000 UTC m=+141.009506739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls") pod "machine-config-operator-74547568cd-6g5ff" (UID: "a7c281fd-3e5a-4edc-98f7-8703c1f08aab") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714671 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle podName:9ad95836-c587-4ca7-b5fa-f878af1019b6 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.214651895 +0000 UTC m=+141.009531899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle") pod "service-ca-9c57cc56f-slln9" (UID: "9ad95836-c587-4ca7-b5fa-f878af1019b6") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.718873 4804 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.719006 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert podName:ab667a9d-5e0b-4faa-909e-5f778579e853 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.218973278 +0000 UTC m=+141.013853282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-47d82" (UID: "ab667a9d-5e0b-4faa-909e-5f778579e853") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.719959 4804 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.720192 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key podName:9ad95836-c587-4ca7-b5fa-f878af1019b6 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.220162776 +0000 UTC m=+141.015042930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key") pod "service-ca-9c57cc56f-slln9" (UID: "9ad95836-c587-4ca7-b5fa-f878af1019b6") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.727044 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.745917 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.765526 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.784504 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.805851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.827310 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.852593 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.865715 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.886118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.906409 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.926551 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.946255 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.966743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.986034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.005645 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.024826 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.045509 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 
28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.065108 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.086029 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.106411 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.125945 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.145743 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.166232 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.185932 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.205794 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.225466 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241653 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241714 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241793 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.242535 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.243349 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.246633 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.246811 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.247593 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.251635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.306768 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.320225 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.326317 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.344785 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.365502 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.385633 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.406154 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.426331 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.445198 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.445494 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.466462 4804 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 11:24:25 crc kubenswrapper[4804]: W0128 11:24:25.468371 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod521dbee5_5d69_4fd4_bcfc_8b2b4b404389.slice/crio-4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36 WatchSource:0}: Error finding container 4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36: Status 404 returned error can't find the container with id 4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36 Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.481918 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.486195 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.506765 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.540638 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.558282 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.582608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.603516 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.603622 4804 request.go:700] Waited for 1.889928854s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.610758 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36"} Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.624046 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.642445 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rm6\" (UniqueName: 
\"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.661509 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.664069 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.683068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.695032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.707853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.709636 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"] Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.722617 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:25 crc kubenswrapper[4804]: W0128 11:24:25.729202 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba221b2c_59ae_4358_9328_2639e1e4e1f9.slice/crio-396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04 WatchSource:0}: Error finding container 396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04: Status 404 returned error can't find the container with id 396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04 Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.742739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:25 crc 
kubenswrapper[4804]: I0128 11:24:25.756318 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.763720 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.772276 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.784959 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.790251 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.800412 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.803742 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.821062 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48l2b\" (UniqueName: \"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.830196 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.844345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.867560 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.875612 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.881011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.900459 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.910145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.911200 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.925004 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.928925 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.933742 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.940187 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.953178 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.953296 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.954798 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"] Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.976097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.978765 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.983407 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.987639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.995686 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.002832 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.012959 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.036417 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.055268 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057811 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057966 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058004 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058043 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: 
\"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058094 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058168 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058272 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058294 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058333 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058378 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058400 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058443 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.059876 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067972 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067997 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.068029 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069768 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069799 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-metrics-tls\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069819 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069838 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069855 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: 
I0128 11:24:26.070535 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070570 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070731 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070754 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070776 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070805 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070825 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070862 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071047 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071117 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.072738 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:26.572716657 +0000 UTC m=+142.367596841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072772 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072798 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072843 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cdff00-d1aa-4535-b269-b692986cd76c-config\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073617 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073694 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") 
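The MountVolume.MountDevice failure above is the first of several in this window: kubelet cannot build a CSI client for kubevirt.io.hostpath-provisioner because that driver has not yet registered with it, so the image-registry pod's PVC mount is parked and retried after the 500ms backoff. The node plugin that will perform the registration is the csi-hostpathplugin-qj7pb pod whose own volumes are being wired up in these same entries. A minimal Go sketch (assuming client-go, a kubeconfig in the default location, and the node name crc, none of which is stated in this log) of how to check which drivers have completed node registration, by reading the CSINode object kubelet maintains:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig at the default path; node name "crc" as in this journal.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The CSINode object mirrors kubelet's list of registered CSI drivers;
        // the failing mount can only proceed once kubevirt.io.hostpath-provisioner
        // appears in this list.
        n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range n.Spec.Drivers {
            fmt.Printf("registered driver: %s (nodeID %s)\n", d.Name, d.NodeID)
        }
    }

Once the hostpath plugin registers, the driver shows up in CSINode and the retried MountDevice can succeed.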
pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073833 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074014 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074245 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.160860 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.174954 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175755 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175790 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176100 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176131 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176158 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176215 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176238 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176260 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176279 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-metrics-tls\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176336 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176404 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176428 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176444 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176483 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176564 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.176661 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.676631683 +0000 UTC m=+142.471511667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176692 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176778 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176806 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176922 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176947 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182367 4804 
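The TearDown failure above is the unmount-side twin of the earlier MountDevice failure: the volume of the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b cannot be torn down either, because both the mount and unmount paths need a CSI client for the still-unregistered driver. The "No retries permitted until ... (durationBeforeRetry 500ms)" wording comes from kubelet's per-volume backoff in nestedpendingoperations.go. A rough Go sketch of that retry shape; the 500ms base matches the log, while the doubling factor and the cap are assumptions about kubelet's defaults, not values read from this journal:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed backoff parameters: 500ms initial delay (matches
        // "durationBeforeRetry 500ms" above), doubling per failed attempt,
        // capped at an assumed maximum.
        wait := 500 * time.Millisecond
        const maxWait = 2*time.Minute + 2*time.Second // assumption, not from the log
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d fails -> no retries permitted for %v\n", attempt, wait)
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
    }

The backoff is per volume, which is why the mount retry for the image-registry PVC and the unmount retry for the old pod are tracked and delayed independently.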
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176692 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176778 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176806 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176922 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176947 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182367 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182413 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182443 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.178820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182498 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182555 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cdff00-d1aa-4535-b269-b692986cd76c-config\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182632 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182715 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182741 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.181241 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182769 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.179854 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.183011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.179543 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.183085 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.181521 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.177481 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.184574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.184773 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185219 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185786 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185817 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185839 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186254 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186280 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186320 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186345 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186366 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186526 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186581 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186667 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186756 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186779 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186821 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186853 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.187758 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.188910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.190736 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.191030 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.691005794 +0000 UTC m=+142.485885778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
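Half a second after the first attempt, the retried MountDevice for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails identically (m=+142.485885778 versus m=+142.367596841), and it will keep failing until the hostpath plugin registers. Registration happens through a socket the node plugin drops into kubelet's plugin-registration directory, which is what the registration-dir host-path volume being attached to csi-hostpathplugin-qj7pb above is for. A small Go sketch that lists the sockets present on a node, i.e. which plugins have announced themselves; the directory path used here is kubelet's usual default and is an assumption, since the log does not print the actual host path:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Assumption: kubelet's default plugin registration directory; the
        // real path behind "registration-dir" is not shown in this journal.
        dir := "/var/lib/kubelet/plugins_registry"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            // A CSI node plugin announces itself by creating a socket here,
            // typically named after the driver; kubelet watches this directory.
            if strings.HasSuffix(e.Name(), ".sock") {
                fmt.Println(filepath.Join(dir, e.Name()))
            }
        }
    }

If no socket for kubevirt.io.hostpath-provisioner exists yet, every CSI mount and unmount for its volumes will keep producing exactly the errors seen in this window.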
\"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.194771 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.196081 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.196158 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.197001 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201413 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201457 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201761 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.202376 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.202995 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.203734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.205536 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206304 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206446 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206793 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.209605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.212859 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.214808 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216035 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216254 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.217449 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" 
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.218266 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.226333 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: W0128 11:24:26.232034 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf33f13a_5328_47e6_8e14_1c0a84927117.slice/crio-b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5 WatchSource:0}: Error finding container b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5: Status 404 returned error can't find the container with id b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5 Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.261685 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.271493 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292086 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292450 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292507 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292535 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292638 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292675 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292752 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292861 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.295228 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.795187289 +0000 UTC m=+142.590067273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295448 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295695 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.297800 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.298061 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.298189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.308426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.309499 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: 
I0128 11:24:26.310756 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.312611 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.318197 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.327347 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.336672 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.345610 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.350509 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.374428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.389192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.398966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.399376 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.899362993 +0000 UTC m=+142.694242977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.403443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.420158 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.465916 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.467046 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.477580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.499237 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.500710 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.506325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.507373 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.507608 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.00758708 +0000 UTC m=+142.802467064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.507988 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.509749 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.00972833 +0000 UTC m=+142.804608314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.511315 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.524223 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.531036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.545786 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.549105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.567160 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.567752 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.572959 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.587647 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.592666 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.594577 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.604282 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.606392 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.610362 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.610618 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.611168 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.111144414 +0000 UTC m=+142.906024398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.611427 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.611824 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:27.111813236 +0000 UTC m=+142.906693220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.618424 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.656413 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"c34e549ef82e1de0c5ed647540a9639d2333b3334a982a8e78d4fa3ecb19f65f"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.656631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"4c605d8dc690a359bbaa89ddb6dec96698b0e5a7bcd497622f21cfa5e3fe6d5a"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.657105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.665069 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.676221 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.690520 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.712427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.712971 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.2129486 +0000 UTC m=+143.007828584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.737115 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"8e98121d156a21d12f1db855471e30841edd1c80eab6747236be7c1d067475b0"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.737150 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"838a4034ac2c2f3026532b6c5f77a741f55bd0351a97d102fe712ff314e6e9a6"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.753532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h44hn" event={"ID":"cf33f13a-5328-47e6-8e14-1c0a84927117","Type":"ContainerStarted","Data":"9f59e2244a434602fa38d4428dfb3cec1341b3c14baac404a3dd53809eb264d8"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.753637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h44hn" event={"ID":"cf33f13a-5328-47e6-8e14-1c0a84927117","Type":"ContainerStarted","Data":"b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.754679 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.762538 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.773018 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781639 4804 generic.go:334] "Generic (PLEG): container finished" podID="57150906-6899-4d65-b5e5-5092215695b7" containerID="a3bd50a4ac59b751fc34755a18bd2719b34f77d96924a2c6bf5778afe5316be5" exitCode=0 Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781766 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerDied","Data":"a3bd50a4ac59b751fc34755a18bd2719b34f77d96924a2c6bf5778afe5316be5"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781858 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerStarted","Data":"1b698278f4e842310cd35893a303cee77e52661c63b15aac54280476e91040c1"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.795611 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" event={"ID":"ba221b2c-59ae-4358-9328-2639e1e4e1f9","Type":"ContainerStarted","Data":"8e34ee944e370d7953eefcf3fbfc0b2b8bf90e9f87c76853c0a8a0b91a834407"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.795694 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" event={"ID":"ba221b2c-59ae-4358-9328-2639e1e4e1f9","Type":"ContainerStarted","Data":"396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.814245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.814713 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.314697736 +0000 UTC m=+143.109577720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: W0128 11:24:26.812851 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61387edd_4fc9_4cb7_8229_a6578d2d15fb.slice/crio-4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e WatchSource:0}: Error finding container 4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e: Status 404 returned error can't find the container with id 4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.918358 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.921655 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.421632851 +0000 UTC m=+143.216512835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.954012 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.020852 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.021217 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.521204494 +0000 UTC m=+143.316084478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.122408 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.123231 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.623199067 +0000 UTC m=+143.418079051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.123589 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.124042 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.624034235 +0000 UTC m=+143.418914219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.157632 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.177917 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.177981 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.183539 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:27 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:27 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:27 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.183609 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.192768 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.214974 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slcp9"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.218513 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.225156 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.225600 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.725578172 +0000 UTC m=+143.520458156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.232032 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.240754 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h44hn" podStartSLOduration=121.240727039 podStartE2EDuration="2m1.240727039s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:27.237340038 +0000 UTC m=+143.032220022" watchObservedRunningTime="2026-01-28 11:24:27.240727039 +0000 UTC m=+143.035607013" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.241697 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.242510 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.244590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.249194 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.250653 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.272100 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.279996 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.289406 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.291847 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" podStartSLOduration=121.291823724 podStartE2EDuration="2m1.291823724s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:27.281456534 +0000 UTC m=+143.076336528" watchObservedRunningTime="2026-01-28 11:24:27.291823724 +0000 UTC m=+143.086703708" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.356016 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.359932 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.859901155 +0000 UTC m=+143.654781169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.458922 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.460014 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.959979936 +0000 UTC m=+143.754859920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.537772 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.543259 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.549641 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.561436 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.562302 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.562698 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.062680551 +0000 UTC m=+143.857560535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.570272 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"] Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.572276 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56b6530_c7d7_432d_bd5e_1a07a2d94515.slice/crio-5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e WatchSource:0}: Error finding container 5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e: Status 404 returned error can't find the container with id 5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.582811 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.600660 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.617390 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"] Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.658304 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5054f20f_444d_40e8_ad18_3515e1ff2638.slice/crio-3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671 WatchSource:0}: Error finding container 3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671: Status 404 returned error can't find the container with id 3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671 Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.664277 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.664668 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.164645404 +0000 UTC m=+143.959525388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.676484 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.685316 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"] Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.689286 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9927b5d4_5460_4d78_9320_af3916443c1a.slice/crio-02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673 WatchSource:0}: Error finding container 02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673: Status 404 returned error can't find the container with id 02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673 Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.694464 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.702781 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc03ebf08_d5a0_48b4_a1ca_3eec30c14490.slice/crio-7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900 WatchSource:0}: Error finding container 7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900: Status 404 returned error can't find the container with id 7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900 Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.716128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"] Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.734456 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e451f_8bcc_49ad_a5e8_502c294e8518.slice/crio-14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c WatchSource:0}: Error finding container 14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c: Status 404 returned error can't find the container with id 14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.741712 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a69cec6_e1b7_4e4d_88f7_de85e459ed7b.slice/crio-b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b WatchSource:0}: Error finding container b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b: Status 404 returned error can't find the container with id b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.752767 4804 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a5d490_28ef_438f_b03c_6b15d30bbb1e.slice/crio-29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0 WatchSource:0}: Error finding container 29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0: Status 404 returned error can't find the container with id 29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0 Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.753527 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.765734 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.766709 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.266694228 +0000 UTC m=+144.061574212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.815070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97kr8" event={"ID":"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb","Type":"ContainerStarted","Data":"9b139cb8da56097cf65f6612c2d1178228173d0d01df5245c5bd587c8df7da2b"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.815134 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97kr8" event={"ID":"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb","Type":"ContainerStarted","Data":"fb40ab307ce4a5caf1b231b75a7adfe6962821c8f2dc60b80dad372b90c37003"} Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.818795 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0c2686a_d8ed_4c34_8677_4371daf94ea4.slice/crio-fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22 WatchSource:0}: Error finding container fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22: Status 404 returned error can't find the container with id fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22 Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820272 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cljd9" event={"ID":"4e425cf1-0352-47be-9c58-2bad27ccc3c1","Type":"ContainerStarted","Data":"3abf8095650166015744b0692bfb8d755bffd67750afd5ad77a5855bdeef339a"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820331 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-cljd9" event={"ID":"4e425cf1-0352-47be-9c58-2bad27ccc3c1","Type":"ContainerStarted","Data":"6b9f1fb982c5257d07339926fb7a00a49338abd8cff67e05976b575df897f9c8"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820463 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.822335 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerStarted","Data":"c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.826267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" event={"ID":"ffe68ef2-471a-42e3-a825-f90c8a5f6028","Type":"ContainerStarted","Data":"4fb80761b670b818cbfc00dc4729aa0a1753e14f72dc1531e8c266e78f899b4f"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.826313 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" event={"ID":"ffe68ef2-471a-42e3-a825-f90c8a5f6028","Type":"ContainerStarted","Data":"634af5bd10abf62a71d59f87a2541399c5773f0d3f1a7a24e748550c9703eb7f"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.834615 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" event={"ID":"c4cdff00-d1aa-4535-b269-b692986cd76c","Type":"ContainerStarted","Data":"f32d30fb5d7e1b78710b5ddfc122e05c1e81e8eb099800479d1eeb44526b6665"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.835569 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.835633 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.840305 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" event={"ID":"456e451f-8bcc-49ad-a5e8-502c294e8518","Type":"ContainerStarted","Data":"14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.880430 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.882726 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.885143 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.38511883 +0000 UTC m=+144.179998814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.891153 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" event={"ID":"f90c0f76-ca48-4b2f-89cc-b90cc1172576","Type":"ContainerStarted","Data":"624b2d06e831bacfc999ebb84f34b3e807fa2e1f141266c26470178331680693"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.903487 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerStarted","Data":"7d2183592100c3b755ea4b9a67254d7368c8e8a78e571f1c24b874621cdac80a"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.942840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" event={"ID":"349fe87d-e741-4dc4-bc78-322b541e0a3f","Type":"ContainerStarted","Data":"21835e1f17594c29f6778f18a9d2cfa993b94d38a983da6a99362ca3804e3f5f"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.944820 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.948680 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.949244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" event={"ID":"61b65dc4-6aaf-4578-adf4-64759773196a","Type":"ContainerStarted","Data":"6fd618208dcc10b2db5a27d6c2d88f9f7014f82e2d583cd0b03a3ec78020706a"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953060 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6kll7" event={"ID":"61387edd-4fc9-4cb7-8229-a6578d2d15fb","Type":"ContainerStarted","Data":"83ab0c912a8a7d32eb665a95e871f12c36c294f768dbb5fcd25c4b56ddbd8f98"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953120 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6kll7" event={"ID":"61387edd-4fc9-4cb7-8229-a6578d2d15fb","Type":"ContainerStarted","Data":"4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"] Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953998 
4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.957008 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerStarted","Data":"da180074ac3e1b702af197f95701d1cff294f3e8895503fdbfbde3d61d0ef87e"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.960130 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" event={"ID":"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d","Type":"ContainerStarted","Data":"5602dfca664869f18078de3f18533cb7e4f55f023e1dfd1ba6fe6bcc81da1296"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.961272 4804 patch_prober.go:28] interesting pod/console-operator-58897d9998-6kll7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.961339 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6kll7" podUID="61387edd-4fc9-4cb7-8229-a6578d2d15fb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.968220 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" event={"ID":"0f90e352-ac01-40fb-bf8d-50500206f0ac","Type":"ContainerStarted","Data":"2fc249a3bb99b94948a6de7cc92b28d11421473128e5d99fc9e7af52b791c22f"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.970692 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"040928f7f7783681cac1ca54191f2137bf69e3b7b5fd0b5aa33ffccd11b93cfe"} Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.985380 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.987323 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.487300678 +0000 UTC m=+144.282180662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.001333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.015277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerStarted","Data":"5cc033f5b7fdb4ed4c13410df4b7e5bac38b23c6c7b99e1ae758945871f6d9b7"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.018285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" event={"ID":"9ad95836-c587-4ca7-b5fa-f878af1019b6","Type":"ContainerStarted","Data":"4797ba443515c885c8f4072498faa67fe63c348f02456ef6e8f5d9f118858289"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.028995 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" event={"ID":"2dce007c-8b8d-4271-bb40-7482176fc529","Type":"ContainerStarted","Data":"40b8ed310457bc0f00986ac6504595e3357a1c9be972ccf980b80a93508a837e"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.031248 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" event={"ID":"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b","Type":"ContainerStarted","Data":"b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.033169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"0e322a333fb5eefa0f687550b5b99556748316711d7cbbeb522e3842b7256871"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.035747 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"883b034a2889463138fade7b419ea017c2bfce371979299cd8f7a797a2683e63"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.041393 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" podStartSLOduration=124.041372851 podStartE2EDuration="2m4.041372851s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.039897393 +0000 UTC m=+143.834777377" watchObservedRunningTime="2026-01-28 11:24:28.041372851 +0000 UTC m=+143.836252835" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.051204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"2c2bb69c0f630959a89190b34a64007b18f8c0bda2b19db0560598c1baaa8b7a"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.051719 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.054298 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"64c5dc9b1e42c4e95788e781597b347ac0700676170c2c076bffecca9838cccf"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.064813 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerStarted","Data":"fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.064895 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerStarted","Data":"cffe7deccba04a98fba8c431ccb78fb720efb5536fc80dba3180146f85a85987"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.065397 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.074721 4804 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4j56 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.074809 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.079363 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" event={"ID":"1a74db24-5aca-48f9-889c-e37d8cdba99e","Type":"ContainerStarted","Data":"c1faa4ff4a670d1fa673dfb9f4d02e027395370b2fbf081ec587f48043939450"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.084705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerStarted","Data":"5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.086648 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 
11:24:28.088683 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.58865463 +0000 UTC m=+144.383534794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.093613 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerStarted","Data":"39c6be7d2c6b604e29ab674e70547e5294e550d001aed4bfc7286a6d8fd167c8"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.115680 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"955e354432e1521f0faa580d1e71e4f71a73c5784df07b9bdb25cf52569e40c5"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.119368 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"f8edde795f44d3c24d4992155087778dbf2413f6d87dca7b471cbe639efa2ffc"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.122760 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" event={"ID":"c03ebf08-d5a0-48b4-a1ca-3eec30c14490","Type":"ContainerStarted","Data":"7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.136719 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" event={"ID":"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3","Type":"ContainerStarted","Data":"2230dd1bd5b3c583e5a4b7a4c92a1eb57c941e41b51a2b6497c63e928058e888"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.140706 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerStarted","Data":"3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.159664 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" podStartSLOduration=123.159638127 podStartE2EDuration="2m3.159638127s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.158439727 +0000 UTC m=+143.953319711" watchObservedRunningTime="2026-01-28 11:24:28.159638127 +0000 UTC m=+143.954518111" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.190345 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.190815 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.690800299 +0000 UTC m=+144.485680283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.201321 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:28 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:28 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:28 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.201380 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.216511 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6kll7" podStartSLOduration=123.216458959 podStartE2EDuration="2m3.216458959s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.206279995 +0000 UTC m=+144.001159979" watchObservedRunningTime="2026-01-28 11:24:28.216458959 +0000 UTC m=+144.011338943" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.248314 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cljd9" podStartSLOduration=123.248283402 podStartE2EDuration="2m3.248283402s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.23783719 +0000 UTC m=+144.032717184" watchObservedRunningTime="2026-01-28 11:24:28.248283402 +0000 UTC m=+144.043163386" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.286488 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" podStartSLOduration=123.286460633 podStartE2EDuration="2m3.286460633s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.283786885 +0000 UTC m=+144.078666870" watchObservedRunningTime="2026-01-28 11:24:28.286460633 +0000 UTC m=+144.081340617" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.299181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.301196 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.801162035 +0000 UTC m=+144.596042019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.359719 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" podStartSLOduration=122.359698134 podStartE2EDuration="2m2.359698134s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.317739088 +0000 UTC m=+144.112619072" watchObservedRunningTime="2026-01-28 11:24:28.359698134 +0000 UTC m=+144.154578118" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.360846 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podStartSLOduration=123.360838021 podStartE2EDuration="2m3.360838021s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.359527968 +0000 UTC m=+144.154407952" watchObservedRunningTime="2026-01-28 11:24:28.360838021 +0000 UTC m=+144.155718015" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.402144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.402590 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.902569509 +0000 UTC m=+144.697449493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.442452 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" podStartSLOduration=123.442425296 podStartE2EDuration="2m3.442425296s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.436017835 +0000 UTC m=+144.230897829" watchObservedRunningTime="2026-01-28 11:24:28.442425296 +0000 UTC m=+144.237305300" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.443511 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-97kr8" podStartSLOduration=5.44349714 podStartE2EDuration="5.44349714s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.396070246 +0000 UTC m=+144.190950230" watchObservedRunningTime="2026-01-28 11:24:28.44349714 +0000 UTC m=+144.238377124" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.505483 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.505926 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.005905116 +0000 UTC m=+144.800785100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.608445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.608841 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.108825539 +0000 UTC m=+144.903705523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.668167 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.709730 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.710289 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.210260204 +0000 UTC m=+145.005140188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.811220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.812265 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.312245587 +0000 UTC m=+145.107125571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.915844 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.916357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.416333538 +0000 UTC m=+145.211213522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.018547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.019044 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.519024774 +0000 UTC m=+145.313904758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.119095 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.120252 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.620227821 +0000 UTC m=+145.415107805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180338 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:29 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:29 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:29 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180772 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180561 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"7aefdfad6e14f5246d58518b9d8bbb54908b7a27f6e772137d3f8ac3ad1d170a"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.193687 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerStarted","Data":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.205607 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"1a82d7ed0c45205bb923cb3978bff5ea93d09c04290ad9c41169c28a19b74f1e"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.225493 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.227468 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.727449595 +0000 UTC m=+145.522329579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.228184 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vc78g" event={"ID":"113634df-0b68-4670-8c3d-8d227c626095","Type":"ContainerStarted","Data":"1632bd26c2563dcc25789aed5bf9598cc88023eda6a5e61451e4daccac27db90"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.228252 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vc78g" event={"ID":"113634df-0b68-4670-8c3d-8d227c626095","Type":"ContainerStarted","Data":"e62b61efa71260f978b3c2c4a4dcf021a1e38d7ac4ce65a2c47defc605859bf0"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.236387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.251271 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" event={"ID":"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3","Type":"ContainerStarted","Data":"303863c11d583a696588c0039ac79e38b7285884c92ccebde06b11b5313c072e"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.267195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"e214781e6dc1330c8de3e95aa27dff62bdfc02e93f36dcd1215852d3374f77d6"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.291573 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xghdb" podStartSLOduration=124.291549155 podStartE2EDuration="2m4.291549155s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.289563101 +0000 UTC m=+145.084443105" watchObservedRunningTime="2026-01-28 11:24:29.291549155 +0000 UTC m=+145.086429139" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.311200 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" event={"ID":"61b65dc4-6aaf-4578-adf4-64759773196a","Type":"ContainerStarted","Data":"fb22c089271c5178d9c9e3aa5e583813de0002f1617ca7995a5a967788f6ab4f"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.329673 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.331474 4804 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.831415962 +0000 UTC m=+145.626295946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.340560 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.352819 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vc78g" podStartSLOduration=6.352801024 podStartE2EDuration="6.352801024s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.351923355 +0000 UTC m=+145.146803339" watchObservedRunningTime="2026-01-28 11:24:29.352801024 +0000 UTC m=+145.147681008" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.363380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" event={"ID":"c03ebf08-d5a0-48b4-a1ca-3eec30c14490","Type":"ContainerStarted","Data":"28776ff7a2258a3015a449e7086b23688da26cb504c3f194f4d3f166c257fb67"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.386061 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerStarted","Data":"e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.387445 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.391794 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" podStartSLOduration=123.391778841 podStartE2EDuration="2m3.391778841s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.389867028 +0000 UTC m=+145.184747012" watchObservedRunningTime="2026-01-28 11:24:29.391778841 +0000 UTC m=+145.186658825" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.396239 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" 
event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"f97ba040ca3e9cdae62ae1b9da5d4c0d116d0ff1b57ce8d6552b481d29ed03b5"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.398768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" event={"ID":"2dce007c-8b8d-4271-bb40-7482176fc529","Type":"ContainerStarted","Data":"2909fdd8821b315791fdc7eb7dfd42870ff8631f1128ff5ff5afa29847b31a79"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.409704 4804 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wg94f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.409761 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.415112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" event={"ID":"1a74db24-5aca-48f9-889c-e37d8cdba99e","Type":"ContainerStarted","Data":"9909541a8b644dd2bf68670a3a60926fe93dd81ce5f6279b0b220f551453eaa1"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.422002 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" event={"ID":"9ad95836-c587-4ca7-b5fa-f878af1019b6","Type":"ContainerStarted","Data":"c32d2306f10484bb9231398f0fa50ecc60495be47b9d9bad2777ab191bf51e53"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.432768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.438292 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"47d63a959066a85b93c282b5d64604c46b7467edaf9c84205538e338580ecc2d"} Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.440188 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.940169867 +0000 UTC m=+145.735049851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.440171 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" podStartSLOduration=123.440146456 podStartE2EDuration="2m3.440146456s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.436573449 +0000 UTC m=+145.231453433" watchObservedRunningTime="2026-01-28 11:24:29.440146456 +0000 UTC m=+145.235026430" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.480759 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"aa7b3313d3e88ff5f3fdd3048e319f34a55e2d620ee8244d458903dc1287cf7a"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.482234 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podStartSLOduration=123.482207534 podStartE2EDuration="2m3.482207534s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.473717616 +0000 UTC m=+145.268597620" watchObservedRunningTime="2026-01-28 11:24:29.482207534 +0000 UTC m=+145.277087518" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.498196 4804 generic.go:334] "Generic (PLEG): container finished" podID="881a5709-4ff6-448e-ba75-caf5f7e61a5b" containerID="3c30605d1847d4d33751ffcafc95368fecee3049a3e341e77df6c23f0a1fe3df" exitCode=0 Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.498468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerDied","Data":"3c30605d1847d4d33751ffcafc95368fecee3049a3e341e77df6c23f0a1fe3df"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.505223 4804 csr.go:261] certificate signing request csr-j9g9m is approved, waiting to be issued Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.516286 4804 csr.go:257] certificate signing request csr-j9g9m is issued Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.516956 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" event={"ID":"0f90e352-ac01-40fb-bf8d-50500206f0ac","Type":"ContainerStarted","Data":"e4684dc793c71b17d780522df34c599379cfb5bef8e16b4539106e9631eef623"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.519847 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" podStartSLOduration=123.519828378 podStartE2EDuration="2m3.519828378s" 
podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.51715591 +0000 UTC m=+145.312035894" watchObservedRunningTime="2026-01-28 11:24:29.519828378 +0000 UTC m=+145.314708362" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.530059 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" event={"ID":"f90c0f76-ca48-4b2f-89cc-b90cc1172576","Type":"ContainerStarted","Data":"523bc1964b4a6d866fc50bc6401fbc40295cf4a51484c2911b6363035e19f603"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.530776 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.543180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.543585 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.043543685 +0000 UTC m=+145.838423669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.552172 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.549665 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" podStartSLOduration=123.549643765 podStartE2EDuration="2m3.549643765s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.548153616 +0000 UTC m=+145.343033600" watchObservedRunningTime="2026-01-28 11:24:29.549643765 +0000 UTC m=+145.344523749" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.556059 4804 generic.go:334] "Generic (PLEG): container finished" podID="65cbbd20-6185-455b-814b-7de34194ec29" containerID="b2b8867f191301517831ca2e719c7a54282c377393208f4b794e328d9d6b3e3b" exitCode=0 Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.556114 4804 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerDied","Data":"b2b8867f191301517831ca2e719c7a54282c377393208f4b794e328d9d6b3e3b"} Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.556453 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.056430778 +0000 UTC m=+145.851310762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.559761 4804 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n9ds8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.569069 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" podUID="f90c0f76-ca48-4b2f-89cc-b90cc1172576" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.592121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"bdb87a4d01d1cdd6047b1a07d8021710f1cd0933374cbb22100d8df26605ca18"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.596327 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" podStartSLOduration=123.596283244 podStartE2EDuration="2m3.596283244s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.586046498 +0000 UTC m=+145.380926492" watchObservedRunningTime="2026-01-28 11:24:29.596283244 +0000 UTC m=+145.391163228" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.616210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" podStartSLOduration=123.616184575 podStartE2EDuration="2m3.616184575s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.615751332 +0000 UTC m=+145.410631316" watchObservedRunningTime="2026-01-28 11:24:29.616184575 +0000 UTC m=+145.411064559" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.621205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" 
event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerStarted","Data":"4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.622329 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.630650 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" event={"ID":"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d","Type":"ContainerStarted","Data":"0fb8b126c781040de7adf6bf7928d003c933d812455f61bb1fbf4fd106abe81d"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635419 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635460 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635595 4804 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4j56 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635680 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.647237 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.647303 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.654494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.657439 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.157410987 +0000 UTC m=+145.952290971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.726584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" podStartSLOduration=123.726559243 podStartE2EDuration="2m3.726559243s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.725973294 +0000 UTC m=+145.520853278" watchObservedRunningTime="2026-01-28 11:24:29.726559243 +0000 UTC m=+145.521439227" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.756813 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.762389 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.262363347 +0000 UTC m=+146.057243531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.854065 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podStartSLOduration=123.854016172 podStartE2EDuration="2m3.854016172s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.839531096 +0000 UTC m=+145.634411080" watchObservedRunningTime="2026-01-28 11:24:29.854016172 +0000 UTC m=+145.648896166" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.855982 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" podStartSLOduration=123.855971885 podStartE2EDuration="2m3.855971885s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.791616436 +0000 UTC m=+145.586496420" watchObservedRunningTime="2026-01-28 11:24:29.855971885 +0000 UTC m=+145.650851889" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.866688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.867129 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.36710689 +0000 UTC m=+146.161986874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.971381 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.971740 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.471726138 +0000 UTC m=+146.266606122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.006469 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.104505 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.105151 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.605127921 +0000 UTC m=+146.400007905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.179558 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:30 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:30 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:30 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.179628 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.207004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.207368 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.707354312 +0000 UTC m=+146.502234296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.309511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.311177 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.811139763 +0000 UTC m=+146.606019747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.414087 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.415076 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.91479228 +0000 UTC m=+146.709672264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.515365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.515514 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.015484121 +0000 UTC m=+146.810364105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.516041 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.516698 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.01666429 +0000 UTC m=+146.811544454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.521964 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 11:19:29 +0000 UTC, rotation deadline is 2026-10-19 17:13:13.097212243 +0000 UTC Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.522012 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6341h48m42.575204638s for next certificate rotation Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.617239 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.117212575 +0000 UTC m=+146.912092559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.617106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.617782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.618324 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.118311052 +0000 UTC m=+146.913191036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.675081 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerStarted","Data":"79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.677092 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.681099 4804 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-44lsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" start-of-body= Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.681172 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.684226 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"08e5f935d64639a4834695ccfdc8dd73b0b7645b5efe75f5bbc5d80e9f7742b5"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.684329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"21f445b12247db461476fb89d776ed5d90b41611135defcf774f096752b8ef72"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.695512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" event={"ID":"349fe87d-e741-4dc4-bc78-322b541e0a3f","Type":"ContainerStarted","Data":"f29564c94936c996bc969fa08173301d4f19fb14727a22cc025b7ee06281451f"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.695920 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.700529 4804 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7ncgb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.700600 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" podUID="349fe87d-e741-4dc4-bc78-322b541e0a3f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.703686 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"696c818d412dee542b594b3f7f141b8d7a946d118198acc75fa302924aa0633e"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.708779 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerStarted","Data":"cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.717932 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" event={"ID":"456e451f-8bcc-49ad-a5e8-502c294e8518","Type":"ContainerStarted","Data":"165731136b6dba16f5966de2819861aff476cef91a2aa7d5362d75f28552dcec"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.721611 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.721867 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-28 11:24:31.221805544 +0000 UTC m=+147.016685518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.721955 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.722383 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.222375702 +0000 UTC m=+147.017255686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.734580 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"1040ad252fc2679fb72d3973423667857921d187dc706253779751b8df30668b"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.739528 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podStartSLOduration=124.739511303 podStartE2EDuration="2m4.739511303s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.739014647 +0000 UTC m=+146.533894631" watchObservedRunningTime="2026-01-28 11:24:30.739511303 +0000 UTC m=+146.534391287" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.749659 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" event={"ID":"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b","Type":"ContainerStarted","Data":"ebe9e57661d63c96b8fd353a537d733c19e6e3d4e1c20d7c46889fbfaffc3d6b"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.750865 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.752535 4804 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2xbh5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.752621 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" podUID="3a69cec6-e1b7-4e4d-88f7-de85e459ed7b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.768579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"50d11c9040bc9b6079b03d0b0e72d9153589f071266fdff2a811f4fe2d4d54c9"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.779965 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"26ea0f4cae28c7f181de8c65eda71ec173d3b499e7e3eb9fc6402df34ccba462"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.790270 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"07c3cf02afa453704d9515a6c925e76a3631b521ac7192734fcad743048f1518"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.790341 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"8207bf0764fe6a56f9e52bcbcc7d4553ca54b56a34a082f3e7c9fc99daac07a5"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.807041 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" podStartSLOduration=124.807022226 podStartE2EDuration="2m4.807022226s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.806016213 +0000 UTC m=+146.600896197" watchObservedRunningTime="2026-01-28 11:24:30.807022226 +0000 UTC m=+146.601902210" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.809251 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"ca24be1a7427663c6a1037c4165004b43fdaf4ac41364af4082d0673b92250f2"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.822934 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"6f31dd9e450b57f26593c55c9ee08606b75d55be1569497d2d751448a3e0a9f0"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.823004 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"2fb2c501ae8507ec5241c44f77cec01654f34450e4d8b3de5187dd5ab29b3151"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.824543 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.827305 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.327255589 +0000 UTC m=+147.122135583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.842280 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerStarted","Data":"bc8eaf7239940bfe6a5561c4bfaaa817ba0282ae7eb954fe8492921c08cb8d94"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.856554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"94f50d2fb3e1793d3de9237b4fc9967b585a6cb26ec523bfe417f73119ba6b2e"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.869996 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" event={"ID":"c4cdff00-d1aa-4535-b269-b692986cd76c","Type":"ContainerStarted","Data":"22a806fce53fd0eaa60c0d6286732b7e11b99ca72b6bdfbf096b80542aba1032"} Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.880000 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.880362 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.906725 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" podStartSLOduration=125.906705043 podStartE2EDuration="2m5.906705043s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.904856093 +0000 UTC m=+146.699736077" watchObservedRunningTime="2026-01-28 11:24:30.906705043 +0000 UTC m=+146.701585267" Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.911933 4804 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2kmn2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 
11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912006 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" podUID="881a5709-4ff6-448e-ba75-caf5f7e61a5b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912191 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"0c0dc66ac9be66325e7e4dcbe6abef69c55b8943ba19d2dd4536dfc36ada4c0b"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-slcp9"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.929801 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.932352 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.432330433 +0000 UTC m=+147.227210627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.969754 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.969802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"55976ebceac6834076b5e889919819c6e43328a0039fab2e0659540494c6445a"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.997290 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.997399 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.023310 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.023540 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.034121 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" podStartSLOduration=125.034100019 podStartE2EDuration="2m5.034100019s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.029349343 +0000 UTC m=+146.824229327" watchObservedRunningTime="2026-01-28 11:24:31.034100019 +0000 UTC m=+146.828979993"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.035155 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" podStartSLOduration=125.035149974 podStartE2EDuration="2m5.035149974s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.982451716 +0000 UTC m=+146.777331720" watchObservedRunningTime="2026-01-28 11:24:31.035149974 +0000 UTC m=+146.830029958"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.036743 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.040434 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.540397935 +0000 UTC m=+147.335277919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.054904 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.059088 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.559066967 +0000 UTC m=+147.353946951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.119515 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" podStartSLOduration=125.119484027 podStartE2EDuration="2m5.119484027s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.118141413 +0000 UTC m=+146.913021407" watchObservedRunningTime="2026-01-28 11:24:31.119484027 +0000 UTC m=+146.914364011"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.157763 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.160173 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.66013916 +0000 UTC m=+147.455019144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.175066 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" podStartSLOduration=125.175020378 podStartE2EDuration="2m5.175020378s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.171513173 +0000 UTC m=+146.966393157" watchObservedRunningTime="2026-01-28 11:24:31.175020378 +0000 UTC m=+146.969900362"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.187870 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:31 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:31 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:31 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.187968 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.211900 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" podStartSLOduration=125.211864535 podStartE2EDuration="2m5.211864535s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.208173484 +0000 UTC m=+147.003053468" watchObservedRunningTime="2026-01-28 11:24:31.211864535 +0000 UTC m=+147.006744519"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.260029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.260357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.760343974 +0000 UTC m=+147.555223958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.301501 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" podStartSLOduration=126.301473502 podStartE2EDuration="2m6.301473502s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.298698571 +0000 UTC m=+147.093578555" watchObservedRunningTime="2026-01-28 11:24:31.301473502 +0000 UTC m=+147.096353486"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.363562 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.364716 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.864691884 +0000 UTC m=+147.659571868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.383434 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" podStartSLOduration=125.383405737 podStartE2EDuration="2m5.383405737s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.339398765 +0000 UTC m=+147.134278769" watchObservedRunningTime="2026-01-28 11:24:31.383405737 +0000 UTC m=+147.178285721"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.384062 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" podStartSLOduration=125.384051109 podStartE2EDuration="2m5.384051109s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.376577854 +0000 UTC m=+147.171457828" watchObservedRunningTime="2026-01-28 11:24:31.384051109 +0000 UTC m=+147.178931123"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.419892 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-slcp9" podStartSLOduration=8.419848602 podStartE2EDuration="8.419848602s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.414433234 +0000 UTC m=+147.209313218" watchObservedRunningTime="2026-01-28 11:24:31.419848602 +0000 UTC m=+147.214728596"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.467795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.468137 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.968121694 +0000 UTC m=+147.763001678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.518955 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" podStartSLOduration=126.518926739 podStartE2EDuration="2m6.518926739s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.517988928 +0000 UTC m=+147.312868912" watchObservedRunningTime="2026-01-28 11:24:31.518926739 +0000 UTC m=+147.313806753"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.573413 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.573786 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.073767176 +0000 UTC m=+147.868647160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.612836 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" podStartSLOduration=125.612794815 podStartE2EDuration="2m5.612794815s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.612234757 +0000 UTC m=+147.407114741" watchObservedRunningTime="2026-01-28 11:24:31.612794815 +0000 UTC m=+147.407674799"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.675349 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.676260 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.176239216 +0000 UTC m=+147.971119190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.688510 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" podStartSLOduration=125.688491557 podStartE2EDuration="2m5.688491557s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.647421971 +0000 UTC m=+147.442301955" watchObservedRunningTime="2026-01-28 11:24:31.688491557 +0000 UTC m=+147.483371541"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.776542 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.777252 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.277235436 +0000 UTC m=+148.072115420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.878762 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.879523 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.879723 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.879920 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.37990196 +0000 UTC m=+148.174781944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.948210 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"1e7f73bb71919aa179c2c8e1a5de137b2caec6edbe00500cb701c732c3a9e8ce"}
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.949479 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.949514 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.971757 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.980984 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.981307 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.481262743 +0000 UTC m=+148.276142727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.981447 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.981687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.981857 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.982007 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.982556 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.482539084 +0000 UTC m=+148.277419108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.991920 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.000599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.001722 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" podStartSLOduration=126.001711832 podStartE2EDuration="2m6.001711832s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.690380869 +0000 UTC m=+147.485260863" watchObservedRunningTime="2026-01-28 11:24:32.001711832 +0000 UTC m=+147.796591816"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.002191 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" podStartSLOduration=126.002186748 podStartE2EDuration="2m6.002186748s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:32.001130474 +0000 UTC m=+147.796010448" watchObservedRunningTime="2026-01-28 11:24:32.002186748 +0000 UTC m=+147.797066732"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.003846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.089395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.091040 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.591017919 +0000 UTC m=+148.385897903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.166078 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.177145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.178388 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.179551 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.189163 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.191321 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192133 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:32 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:32 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:32 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192193 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.205968 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.705946916 +0000 UTC m=+148.500826900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.249172 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294300 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294563 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294676 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.294764 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.794749958 +0000 UTC m=+148.589629942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.357101 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395712 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.396041 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.896028847 +0000 UTC m=+148.690908831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.396718 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.397011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.448585 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hzmvb"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.453577 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.493558 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497390 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.497531 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.997510993 +0000 UTC m=+148.792390977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497668 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497725 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497755 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.498141 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.998133363 +0000 UTC m=+148.793013347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.499695 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.510034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.548808 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599598 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599932 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599979 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600050 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.600615 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.100595121 +0000 UTC m=+148.895475105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.662328 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48gg7"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.669985 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.700688 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.701353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.701628 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.201612492 +0000 UTC m=+148.996492466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.737355 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.779196 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811503 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811735 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811797 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811822 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.811981 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.311965019 +0000 UTC m=+149.106845003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.830510 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvdtx"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.831720 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.886645 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"]
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917098 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917816 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917950 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918104 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918687 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.919033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.919433 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.419416831 +0000 UTC m=+149.214296815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.954995 4804 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-44lsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.955211 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.972844 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.001384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.002099 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.010795 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.011099 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.020805 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021263 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021311 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.021658 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.521617751 +0000 UTC m=+149.316497735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.022294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.022464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.048294 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.054339 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.079413 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125649 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125871 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.126350 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.626337453 +0000 UTC m=+149.421217437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.175666 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"79c972b68dd0407ad190f5e389c998aa8f50ba7e67254fb24302fd6cf0cfe94b"}
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.175718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"c72a9012ebef3c27109c781d40de661745812bdc4a5532f3b04d5473d5d61e2a"}
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.193218 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:33 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:33 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:33 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.193286 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.206113 4804 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.221733 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235727 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.275831 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.293092 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.793062017 +0000 UTC m=+149.587942001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.346376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.349168 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.849150716 +0000 UTC m=+149.644030700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.361187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.438171 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.449064 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.449642 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.949616178 +0000 UTC m=+149.744496162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.550964 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.551473 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.051455767 +0000 UTC m=+149.846335751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: W0128 11:24:33.622302 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb WatchSource:0}: Error finding container 1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb: Status 404 returned error can't find the container with id 1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.656850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.657809 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.157783622 +0000 UTC m=+149.952663606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.758496 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.758823 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.258811283 +0000 UTC m=+150.053691257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.863363 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.863772 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.363749752 +0000 UTC m=+150.158629726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.915936 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.931573 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.965900 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.966220 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.46620891 +0000 UTC m=+150.261088884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.995130 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:24:34 crc kubenswrapper[4804]: W0128 11:24:34.042393 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a8d8bca_1ae3_44d1_9793_29fc2a2f5e8d.slice/crio-ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1 WatchSource:0}: Error finding container ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1: Status 404 returned error can't find the container with id ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1 Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.068679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.069048 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.56902683 +0000 UTC m=+150.363906814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.177700 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.178041 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.678028343 +0000 UTC m=+150.472908327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.180600 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.188847 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:34 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:34 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:34 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.188978 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.204446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"acda3dbec85d3fa0ee275eb6129c2c813f4e3844d964668a69753eb930b9adf5"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.204507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c4a9cf468c78cf8a7077fb201a18268b7f6871a942cd86bef6a0a81214408c7a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.244958 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e961f35e039ee65dcb5d21f6c328c81255264a7e828ea30c312b2947d5d33dff"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.245009 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.258402 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"f2d704f75cce250d039d0dd04e24016c6014cdedb092df3fd7df1955f57ab50a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.279695 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.280383 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.780355826 +0000 UTC m=+150.575235840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.309728 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.324245 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"fe87ae707136fde84e732e2eb14bec7211f95794ae9505a267289bdc0f27bdfc"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.347267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99427eca64f1fb6963924a5df595bed5dcf9ca2e6752fe7aa27983447bc5452a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.347328 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ea4b5ed441b759e15e596e88595a84ee67c7e289506e79699b2d4de856083eb0"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.348159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.352741 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerStarted","Data":"60c5c3bae740bf47c18e8908e6f28f0a1a7fe1ff6bab40703594d2789651297c"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.387183 4804 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.387997 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.389407 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.8893926 +0000 UTC m=+150.684272584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.394990 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" podStartSLOduration=11.394955763 podStartE2EDuration="11.394955763s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:34.374648527 +0000 UTC m=+150.169528511" watchObservedRunningTime="2026-01-28 11:24:34.394955763 +0000 UTC m=+150.189835747" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.488639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.490290 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.990270606 +0000 UTC m=+150.785150590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.530830 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.562425 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.563494 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.571710 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.591054 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.591464 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.091448892 +0000 UTC m=+150.886328876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.596281 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693142 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.693305 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.19327074 +0000 UTC m=+150.988150724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693650 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693696 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693740 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.694245 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.194228572 +0000 UTC m=+150.989108556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795501 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795757 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795800 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.796253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.796326 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.296311657 +0000 UTC m=+151.091191641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.796534 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.820734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.888875 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.900611 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.901106 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.401086721 +0000 UTC m=+151.195966765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.968727 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.970217 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.983630 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.026595 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027442 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.027972 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.527945379 +0000 UTC m=+151.322825363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.137383 4804 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T11:24:34.387207028Z","Handler":null,"Name":""} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139193 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139274 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.139924 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.639913338 +0000 UTC m=+151.434793322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.140402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.156020 4804 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.156056 4804 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.172055 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.181123 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:35 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:35 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:35 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.181168 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.242417 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.279276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.322187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:35 crc kubenswrapper[4804]: W0128 11:24:35.330052 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6caae643_ab85_4628_bcb1_9c0ecc48c568.slice/crio-1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e WatchSource:0}: Error finding container 1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e: Status 404 returned error can't find the container with id 1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.343562 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348366 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348949 4804 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348993 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.350001 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.354349 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.362047 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.369645 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.383565 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerStarted","Data":"9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.383609 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerStarted","Data":"07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.396478 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.396538 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.398240 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.398284 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.400720 4804 generic.go:334] "Generic (PLEG): container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.400764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.421212 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.424589 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.424550568 podStartE2EDuration="3.424550568s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:35.409606928 +0000 UTC m=+151.204486912" watchObservedRunningTime="2026-01-28 11:24:35.424550568 +0000 UTC m=+151.219430552" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 
11:24:35.429523 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.435953 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.439215 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.441419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.441521 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"306a58f4bdfd74cc31f69b2bdc88525986d7ff5e31a732a9de2866902df8686e"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.451832 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.452457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.452521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.550061 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.551380 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555369 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555860 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.561335 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.561635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.567029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.590215 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659504 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659666 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.689029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.700228 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.709375 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.713002 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod24e7f4b9_abfc_4b9b_929b_1288abb63cc2.slice/crio-9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:24:35 crc kubenswrapper[4804]: W0128 11:24:35.737512 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac859130_1b71_4993_ab3d_66600459a32a.slice/crio-035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195 WatchSource:0}: Error finding container 035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195: Status 404 returned error can't find the container with id 035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.761923 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762044 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762686 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762870 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" 
(UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.792310 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.843059 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.884131 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.895710 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.929669 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930328 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930337 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930375 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930408 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.941012 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.941828 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.958343 4804 patch_prober.go:28] interesting pod/console-f9d7485db-xghdb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.958405 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xghdb" 
podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.984684 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.984723 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.020495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.027574 4804 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vbjk6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]log ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]etcd ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/max-in-flight-filter ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 11:24:36 crc kubenswrapper[4804]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/openshift.io-startinformers ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 11:24:36 crc kubenswrapper[4804]: livez check failed Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.027640 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" podUID="65cbbd20-6185-455b-814b-7de34194ec29" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.175400 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.181682 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:36 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.181740 4804 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.214689 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.466015 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.466119 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.472159 4804 generic.go:334] "Generic (PLEG): container finished" podID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerID="9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.472283 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerDied","Data":"9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488393 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" containerID="38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488543 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"b9f8fd7843e0d657401a449864e7360a08eaacd9d3a996600b88abc62b6de5e9"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.505545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerStarted","Data":"21c407385a0e63e468749b798e82d759e0bd8cab55527e3595f2c32049181c1c"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.510760 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.510955 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.511029 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" 
event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.544556 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.595811 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.943167 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.180452 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:37 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:37 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:37 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.180516 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.582828 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerStarted","Data":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"} Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.583665 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.638188 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" podStartSLOduration=131.63816137 podStartE2EDuration="2m11.63816137s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:37.633394734 +0000 UTC m=+153.428274718" watchObservedRunningTime="2026-01-28 11:24:37.63816137 +0000 UTC m=+153.433041344" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642584 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc" exitCode=0 Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642646 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"} Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" 
event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerStarted","Data":"8508960517ab52e83d2de6d52c76bf4bc148c42531ea9ecd0a9fb9ecc845cace"} Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.087537 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.118823 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.118896 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.121714 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24e7f4b9-abfc-4b9b-929b-1288abb63cc2" (UID: "24e7f4b9-abfc-4b9b-929b-1288abb63cc2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.129042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24e7f4b9-abfc-4b9b-929b-1288abb63cc2" (UID: "24e7f4b9-abfc-4b9b-929b-1288abb63cc2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.180039 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:38 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:38 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:38 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.180168 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.221505 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.221544 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.589683 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:38 crc kubenswrapper[4804]: E0128 11:24:38.589961 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.589973 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.590097 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.590471 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.596287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.596871 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.616028 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.628934 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.629016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.676479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerDied","Data":"07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860"} Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.676560 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.677534 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.731410 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.731948 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.732070 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.771861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.951775 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.181151 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:39 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:39 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:39 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.181213 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.507935 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.699140 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerStarted","Data":"d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489"} Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.179386 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:40 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:40 crc 
kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:40 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.179453 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.721725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerStarted","Data":"ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f"} Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.740213 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.74017535 podStartE2EDuration="2.74017535s" podCreationTimestamp="2026-01-28 11:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:40.738806165 +0000 UTC m=+156.533686169" watchObservedRunningTime="2026-01-28 11:24:40.74017535 +0000 UTC m=+156.535055334" Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.956481 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:40.996146 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.002769 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.188487 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:41 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:41 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:41 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.188544 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.754522 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerID="cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895" exitCode=0 Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.754595 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerDied","Data":"cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895"} Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.758960 4804 generic.go:334] "Generic (PLEG): container finished" podID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerID="ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f" exitCode=0 
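[Annotation] The "Observed pod startup duration" entries above are internally consistent and can be checked by hand: when firstStartedPulling and lastFinishedPulling are the zero time (no image pull was needed), the logged podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp; for revision-pruner-8-crc that is 11:24:40.74017535 - 11:24:38 = 2.74017535s. A minimal Python sketch of that arithmetic, assuming only the timestamp format shown in these entries (the parse helper is ours, not kubelet code):

    import re
    from datetime import datetime

    def parse(ts: str) -> datetime:
        # e.g. "2026-01-28 11:24:40.74017535 +0000 UTC m=+156.535055334":
        # drop the monotonic-clock "m=" suffix and the "UTC" word, and truncate
        # the fraction to 6 digits (strptime's %f accepts at most microseconds).
        ts = ts.split(" m=")[0].replace(" UTC", "")
        ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)
        fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in ts else "%Y-%m-%d %H:%M:%S %z"
        return datetime.strptime(ts, fmt)

    created = parse("2026-01-28 11:24:38 +0000 UTC")
    observed = parse("2026-01-28 11:24:40.74017535 +0000 UTC m=+156.535055334")
    # Prints 2.740175: the logged 2.74017535s, truncated to microseconds.
    print((observed - created).total_seconds())

The same regularity makes the excerpt easy to slice mechanically: every kubenswrapper entry carries a klog header (severity letter, MMDD, wall-clock time, PID, source file:line) ahead of the message. A minimal sketch, again assuming only the line shapes visible here (probe_failures and its regex are ours, not part of kubelet):

    import re
    from collections import Counter

    # severity+MMDD, HH:MM:SS.frac, PID, file:line], message
    KLOG = re.compile(
        r'([IWEF])(\d{4})\s+(\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+([\w./-]+:\d+)\]\s*(.*)'
    )

    def probe_failures(journal_text: str) -> Counter:
        # Count prober.go "Probe failed" entries per (pod, probeType).
        hits = Counter()
        for line in journal_text.splitlines():
            m = KLOG.search(line)
            if not m or not m.group(5).startswith('"Probe failed"'):
                continue
            pod = re.search(r'pod="([^"]+)"', m.group(5))
            probe = re.search(r'probeType="([^"]+)"', m.group(5))
            if pod and probe:
                hits[(pod.group(1), probe.group(1))] += 1
        return hits

Fed this excerpt, the dominant key is ("openshift-ingress/router-default-5444994796-h44hn", "Startup"): the router's startup probe fails once per second from 11:24:36 through 11:24:50, until the probe flips to "started" at 11:24:51 below. [End annotation]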
Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.759930 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerDied","Data":"ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f"}
Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.179015 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:42 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:42 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:42 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.179067 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.583242 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.584075 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:24:43 crc kubenswrapper[4804]: I0128 11:24:43.179226 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:43 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:43 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:43 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:43 crc kubenswrapper[4804]: I0128 11:24:43.179281 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:44 crc kubenswrapper[4804]: I0128 11:24:44.178746 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:44 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:44 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:44 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:44 crc kubenswrapper[4804]: I0128 11:24:44.179284 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.179675 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:45 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:45 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:45 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.179747 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.939023 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.940968 4804 patch_prober.go:28] interesting pod/console-f9d7485db-xghdb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.941031 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xghdb" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.183063 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:46 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:46 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:46 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.183635 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.994657 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.001146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.143438 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.179939 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:47 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:47 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:47 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.180027 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866796 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wpb6_46da2b10-cba3-46fa-a2f3-972499966fd3/cluster-samples-operator/0.log"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866847 4804 generic.go:334] "Generic (PLEG): container finished" podID="46da2b10-cba3-46fa-a2f3-972499966fd3" containerID="c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd" exitCode=2
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerDied","Data":"c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd"}
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.867393 4804 scope.go:117] "RemoveContainer" containerID="c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.981830 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.989545 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.016306 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") "
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.016399 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") "
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.017676 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.044247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq" (OuterVolumeSpecName: "kube-api-access-4nhdq") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "kube-api-access-4nhdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.118543 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") "
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") "
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119512 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") "
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119613 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" (UID: "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120117 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120167 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120202 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.125605 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.138089 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" (UID: "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.185490 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:48 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:48 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:48 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.185597 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.222851 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.222913 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.873868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerDied","Data":"d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489"}
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.873967 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489"
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.873930 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876872 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerDied","Data":"c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22"}
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876938 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22"
Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876995 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.178715 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:49 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:49 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:49 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.178772 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.781480 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"]
Jan 28 11:24:49 crc kubenswrapper[4804]: W0128 11:24:49.792256 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03844e8b_8d66_4cd7_aa19_51caa1407918.slice/crio-080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d WatchSource:0}: Error finding container 080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d: Status 404 returned error can't find the container with id 080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.883635 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d"}
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.886598 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wpb6_46da2b10-cba3-46fa-a2f3-972499966fd3/cluster-samples-operator/0.log"
Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.886641 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"e82503a1c0d24e0741c8abe761ce9daccd9f772e2da6578ac2d39a02c7bf1f9f"}
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.178647 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:50 crc kubenswrapper[4804]: [+]has-synced ok
Jan 28 11:24:50 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:50 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.179142 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.388779 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"]
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.389121 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" containerID="cri-o://fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f" gracePeriod=30
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.417142 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"]
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.417360 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" containerID="cri-o://e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8" gracePeriod=30
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.894380 4804 generic.go:334] "Generic (PLEG): container finished" podID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerID="e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8" exitCode=0
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.894420 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerDied","Data":"e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8"}
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.896444 4804 generic.go:334] "Generic (PLEG): container finished" podID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerID="fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f" exitCode=0
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.896518 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerDied","Data":"fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f"}
Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.898695 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"be5253521d5e6ad770d8cc9a163638f1cea5c7460a707c026d28b5f6c44e6418"}
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.182292 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.186230 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.408551 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479227 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"]
Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479580 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479597 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner"
Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479615 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479623 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles"
Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479634 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479642 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479759 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479771 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.480299 4804 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485019 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485111 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485313 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486458 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca" (OuterVolumeSpecName: "client-ca") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486471 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config" (OuterVolumeSpecName: "config") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.493326 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.493407 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj" (OuterVolumeSpecName: "kube-api-access-dpcsj") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "kube-api-access-dpcsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.503073 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587108 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587252 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587394 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587412 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587426 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587442 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587455 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689161 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689271 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.690893 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.691192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " 
pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.695103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.747968 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.748494 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.786814 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.825480 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.890920 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891437 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891509 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.892289 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config" (OuterVolumeSpecName: "config") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.892568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca" (OuterVolumeSpecName: "client-ca") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.897622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk" (OuterVolumeSpecName: "kube-api-access-74dlk") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "kube-api-access-74dlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.902229 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.914417 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"4e1665d2643f9e7843913de938e6efabf84334825d7e735b2ad99d81bececd70"} Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919233 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerDied","Data":"5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e"} Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919306 4804 scope.go:117] "RemoveContainer" containerID="e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919470 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.924104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerDied","Data":"cffe7deccba04a98fba8c431ccb78fb720efb5536fc80dba3180146f85a85987"} Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.924352 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.953515 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bgqd8" podStartSLOduration=146.953480712 podStartE2EDuration="2m26.953480712s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:51.941488129 +0000 UTC m=+167.736368123" watchObservedRunningTime="2026-01-28 11:24:51.953480712 +0000 UTC m=+167.748360716" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.957497 4804 scope.go:117] "RemoveContainer" containerID="fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.973968 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.977324 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995715 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995753 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995763 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995775 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.008577 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.014335 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.039932 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:24:52 crc kubenswrapper[4804]: W0128 11:24:52.049816 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59930ea0_7a62_4dd0_a48d_0246b34a6be7.slice/crio-a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6 WatchSource:0}: Error finding container a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6: Status 404 returned error can't find the container with id a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6 Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.289714 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:24:52 crc kubenswrapper[4804]: E0128 11:24:52.290221 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.290307 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.290504 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.291050 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.296777 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.297285 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.297623 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.300025 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.300594 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.301381 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.305547 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403419 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403518 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " 
pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403595 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505011 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505315 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505425 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505556 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.506837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.508433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.518224 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " 
pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.542502 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.611475 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.924814 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" path="/var/lib/kubelet/pods/c802cb06-d5ee-489c-aa2d-4dee5f3f2557/volumes" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.925562 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" path="/var/lib/kubelet/pods/d56b6530-c7d7-432d-bd5e-1a07a2d94515/volumes" Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.936331 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerStarted","Data":"06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58"} Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.936446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerStarted","Data":"a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6"} Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.110029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.960206 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerStarted","Data":"9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d"} Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.960257 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerStarted","Data":"619586aca8589d38b78a3357b12c57e55e945004febd99d5969acb6d2850fa1c"} Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.981407 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podStartSLOduration=3.981387577 podStartE2EDuration="3.981387577s" podCreationTimestamp="2026-01-28 11:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:53.980923572 +0000 UTC m=+169.775803556" watchObservedRunningTime="2026-01-28 11:24:53.981387577 +0000 UTC m=+169.776267561" Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.966644 4804 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.973776 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.992450 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podStartSLOduration=4.992427555 podStartE2EDuration="4.992427555s" podCreationTimestamp="2026-01-28 11:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:54.989046674 +0000 UTC m=+170.783926658" watchObservedRunningTime="2026-01-28 11:24:54.992427555 +0000 UTC m=+170.787307539" Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.717732 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.944649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.949011 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:25:01 crc kubenswrapper[4804]: I0128 11:25:01.826566 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:25:01 crc kubenswrapper[4804]: I0128 11:25:01.833094 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:25:05 crc kubenswrapper[4804]: I0128 11:25:05.701069 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.860046 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.860634 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6sc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-48gg7_openshift-marketplace(23f32834-88e4-454d-81fe-6370a2bc8e0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.862526 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.328207 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.328840 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" containerID="cri-o://06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58" gracePeriod=30 Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.435981 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.436193 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" containerID="cri-o://9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d" gracePeriod=30 Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.069418 4804 generic.go:334] "Generic (PLEG): container finished" podID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerID="9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d" exitCode=0 Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.069554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerDied","Data":"9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d"} Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.073495 4804 generic.go:334] "Generic (PLEG): container finished" podID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerID="06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58" exitCode=0 Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.074041 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerDied","Data":"06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58"} Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.827849 4804 patch_prober.go:28] interesting pod/controller-manager-5dc588d788-64sh7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.827996 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.172284 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.582661 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.582744 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.612394 4804 patch_prober.go:28] interesting pod/route-controller-manager-564dc4567b-ss5tk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.612783 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.750118 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.860764 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.860929 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n2zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nw6s2_openshift-marketplace(759bdf85-0cca-46db-8126-fab61a8664a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.862054 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.863544 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.863662 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gn5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4842n_openshift-marketplace(ac859130-1b71-4993-ab3d-66600459a32a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.864795 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896107 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896268 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dklnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jmw4q_openshift-marketplace(b641b655-0d3e-4838-8c87-fc72873f1944): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896316 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896383 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wms6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gw5tb_openshift-marketplace(8a0ef2f6-3113-478c-bb8c-9ea8e004a27d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.897417 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.897523 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.126601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63"}
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.135997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130"}
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.139500 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02"}
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.141357 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944"
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.141462 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.141680 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8"
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.142983 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.192561 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.193727 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.201398 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.201561 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.219202 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.263391 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359536 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359567 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359697 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"]
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359935 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.360120 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360150 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360300 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360759 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.361633 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config" (OuterVolumeSpecName: "config") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.361740 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.369280 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.373178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv" (OuterVolumeSpecName: "kube-api-access-7f7mv") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "kube-api-access-7f7mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.373430 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"]
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461171 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461971 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461995 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462023 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462241 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462377 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462392 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462403 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462415 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.479064 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.549517 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563192 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563359 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563501 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563543 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.564779 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.565359 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.570706 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.587408 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.664621 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665101 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665166 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665261 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665288 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") "
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca" (OuterVolumeSpecName: "client-ca") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665761 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665838 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config" (OuterVolumeSpecName: "config") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666426 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666455 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666467 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.672046 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd" (OuterVolumeSpecName: "kube-api-access-kfmsd") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "kube-api-access-kfmsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.672182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.674487 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.746125 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.767973 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.768357 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.879253 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"]
Jan 28 11:25:14 crc kubenswrapper[4804]: W0128 11:25:14.891043 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75052041_d7ef_4a05_ac6d_fdbf2f8e2ab9.slice/crio-9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e WatchSource:0}: Error finding container 9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e: Status 404 returned error can't find the container with id 9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143182 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerStarted","Data":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143230 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerStarted","Data":"9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143403 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.150612 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02" exitCode=0
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.150678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerDied","Data":"619586aca8589d38b78a3357b12c57e55e945004febd99d5969acb6d2850fa1c"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168656 4804 scope.go:117] "RemoveContainer" containerID="9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168825 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.178833 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerStarted","Data":"0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.178906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerStarted","Data":"5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.179233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" podStartSLOduration=5.179184755 podStartE2EDuration="5.179184755s" podCreationTimestamp="2026-01-28 11:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:15.169302962 +0000 UTC m=+190.964182956" watchObservedRunningTime="2026-01-28 11:25:15.179184755 +0000 UTC m=+190.974064739"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.185432 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerDied","Data":"a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.185758 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.187370 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63" exitCode=0
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.187479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.195112 4804 generic.go:334] "Generic (PLEG): container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130" exitCode=0
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.195163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130"}
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.207374 4804 scope.go:117] "RemoveContainer" containerID="06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.233062 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.235959 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.264384 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.264361907 podStartE2EDuration="1.264361907s" podCreationTimestamp="2026-01-28 11:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:15.242383087 +0000 UTC m=+191.037263071" watchObservedRunningTime="2026-01-28 11:25:15.264361907 +0000 UTC m=+191.059241891"
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.278109 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"]
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.280764 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"]
Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.566965 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.206253 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1"}
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.208874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381"}
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.214604 4804 generic.go:334] "Generic (PLEG): container finished" podID="915725ae-1097-4499-a143-bc1355edd31b" containerID="0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407" exitCode=0
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.214705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerDied","Data":"0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407"}
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.229508 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hzmvb" podStartSLOduration=4.018623509 podStartE2EDuration="44.22948729s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.429120488 +0000 UTC m=+151.224000472" lastFinishedPulling="2026-01-28 11:25:15.639984279 +0000 UTC m=+191.434864253" observedRunningTime="2026-01-28 11:25:16.226633666 +0000 UTC m=+192.021513660" watchObservedRunningTime="2026-01-28 11:25:16.22948729 +0000 UTC m=+192.024367274"
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.261239 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvdtx" podStartSLOduration=4.033821828 podStartE2EDuration="44.26121622s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.428787307 +0000 UTC m=+151.223667291" lastFinishedPulling="2026-01-28 11:25:15.656181699 +0000 UTC m=+191.451061683" observedRunningTime="2026-01-28 11:25:16.258197961 +0000 UTC m=+192.053077945" watchObservedRunningTime="2026-01-28 11:25:16.26121622 +0000 UTC m=+192.056096204"
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.930272 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" path="/var/lib/kubelet/pods/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da/volumes"
Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.931415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" path="/var/lib/kubelet/pods/59930ea0-7a62-4dd0-a48d-0246b34a6be7/volumes"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.226066 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19"}
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.243817 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9b7c6" podStartSLOduration=3.579145715 podStartE2EDuration="43.243795514s" podCreationTimestamp="2026-01-28 11:24:34 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.468722011 +0000 UTC m=+152.263601985" lastFinishedPulling="2026-01-28 11:25:16.1333718 +0000 UTC m=+191.928251784" observedRunningTime="2026-01-28 11:25:17.241005633 +0000 UTC m=+193.035885617" watchObservedRunningTime="2026-01-28 11:25:17.243795514 +0000 UTC m=+193.038675498"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.309904 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"]
Jan 28 11:25:17 crc kubenswrapper[4804]: E0128 11:25:17.310562 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.310581 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.310714 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.312558 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.317137 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.318823 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319052 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319228 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319412 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319662 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.321389 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"]
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.328144 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415263 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415311 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415339 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415395 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516347 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516391 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516475 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516491 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.518303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.518875 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.519734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.524339 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.534425 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.562711 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.630697 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718370 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"915725ae-1097-4499-a143-bc1355edd31b\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") "
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718471 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "915725ae-1097-4499-a143-bc1355edd31b" (UID: "915725ae-1097-4499-a143-bc1355edd31b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718771 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"915725ae-1097-4499-a143-bc1355edd31b\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") "
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.719161 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.771960 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "915725ae-1097-4499-a143-bc1355edd31b" (UID: "915725ae-1097-4499-a143-bc1355edd31b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.820756 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.117741 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"]
Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243616 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerDied","Data":"5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986"}
Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243706 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986"
Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.245631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerStarted","Data":"9fd73d399deb0cba0c34dd2f8c6fce22c13b39693edc3b28ad57bd6a22ed7ddb"}
Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.161966 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"]
Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.263938 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerStarted","Data":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"}
Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.264719 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.273247 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74695d59-ptx55"
Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.339607 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" podStartSLOduration=9.339591155 podStartE2EDuration="9.339591155s" podCreationTimestamp="2026-01-28 11:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:19.297050721 +0000 UTC m=+195.091930705" watchObservedRunningTime="2026-01-28 11:25:19.339591155 +0000 UTC m=+195.134471139"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.182865 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 11:25:21 crc kubenswrapper[4804]: E0128 11:25:21.183467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183483 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183571 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183948 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.189328 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.189641 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.200464 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266078 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266131 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266198 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.367910 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.367994 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368059 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368078 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.386546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.530427 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.912143 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.281494 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerStarted","Data":"1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba"}
Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.780958 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.781341 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.061185 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.206874 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.206954 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.248218 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.290409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerStarted","Data":"f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185"}
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.318532 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.318501835 podStartE2EDuration="2.318501835s" podCreationTimestamp="2026-01-28 11:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:23.306720104 +0000 UTC m=+199.101600088" watchObservedRunningTime="2026-01-28 11:25:23.318501835 +0000 UTC m=+199.113381819"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.328364 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.339601 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hzmvb"
Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.361715 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"]
Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.890342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9b7c6"
Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.890402 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9b7c6"
Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.931199 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9b7c6"
Jan 28 11:25:25 crc kubenswrapper[4804]: I0128 11:25:25.300803 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kvdtx" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" containerID="cri-o://c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381" gracePeriod=2
Jan 28 11:25:25 crc kubenswrapper[4804]: I0128 11:25:25.343058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9b7c6"
Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.307693 4804 generic.go:334] "Generic (PLEG): container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381" exitCode=0
Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.307834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381"}
Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.967036 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074330 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") "
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") "
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074564 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") "
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.075375 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities" (OuterVolumeSpecName: "utilities") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.083078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7" (OuterVolumeSpecName: "kube-api-access-9wwt7") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "kube-api-access-9wwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.135070 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176317 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176350 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176359 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"f2d704f75cce250d039d0dd04e24016c6014cdedb092df3fd7df1955f57ab50a"}
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317371 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx"
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317403 4804 scope.go:117] "RemoveContainer" containerID="c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381"
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.339397 4804 scope.go:117] "RemoveContainer" containerID="97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130"
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.356142 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"]
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.360309 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"]
Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.361027 4804 scope.go:117] "RemoveContainer" containerID="46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af"
Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.325186 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" exitCode=0
Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.325267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129"}
Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.329317 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c" exitCode=0
Jan 28
11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.329371 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"} Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.336867 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4"} Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.920860 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad471e3-4346-4464-94bf-778299801fe4" path="/var/lib/kubelet/pods/4ad471e3-4346-4464-94bf-778299801fe4/volumes" Jan 28 11:25:29 crc kubenswrapper[4804]: I0128 11:25:29.343854 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" containerID="7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4" exitCode=0 Jan 28 11:25:29 crc kubenswrapper[4804]: I0128 11:25:29.343920 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.301843 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.302529 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" containerID="cri-o://89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" gracePeriod=30 Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.318348 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.318557 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" containerID="cri-o://0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" gracePeriod=30 Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.352653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.355225 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerStarted","Data":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.357597 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" 
event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.392995 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmw4q" podStartSLOduration=2.060841935 podStartE2EDuration="55.392974702s" podCreationTimestamp="2026-01-28 11:24:35 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.492452489 +0000 UTC m=+152.287332473" lastFinishedPulling="2026-01-28 11:25:29.824585256 +0000 UTC m=+205.619465240" observedRunningTime="2026-01-28 11:25:30.387734659 +0000 UTC m=+206.182614653" watchObservedRunningTime="2026-01-28 11:25:30.392974702 +0000 UTC m=+206.187854686" Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.424956 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nw6s2" podStartSLOduration=3.031664682 podStartE2EDuration="55.424940223s" podCreationTimestamp="2026-01-28 11:24:35 +0000 UTC" firstStartedPulling="2026-01-28 11:24:37.646240515 +0000 UTC m=+153.441120499" lastFinishedPulling="2026-01-28 11:25:30.039516056 +0000 UTC m=+205.834396040" observedRunningTime="2026-01-28 11:25:30.423294718 +0000 UTC m=+206.218174702" watchObservedRunningTime="2026-01-28 11:25:30.424940223 +0000 UTC m=+206.219820207" Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.970806 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.005158 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140780 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140834 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140863 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140971 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod 
\"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141041 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141123 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.142335 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.142784 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config" (OuterVolumeSpecName: "config") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca" (OuterVolumeSpecName: "client-ca") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config" (OuterVolumeSpecName: "config") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143638 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.148216 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v" (OuterVolumeSpecName: "kube-api-access-jpg9v") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "kube-api-access-jpg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.164547 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.164801 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.167244 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g" (OuterVolumeSpecName: "kube-api-access-8vb8g") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "kube-api-access-8vb8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243032 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243300 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243380 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243453 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243527 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243586 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243809 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243930 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.244053 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.366371 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.366449 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.373125 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerStarted","Data":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377476 4804 generic.go:334] "Generic (PLEG): container finished" podID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" 
containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377591 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377776 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerDied","Data":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerDied","Data":"9fd73d399deb0cba0c34dd2f8c6fce22c13b39693edc3b28ad57bd6a22ed7ddb"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377994 4804 scope.go:117] "RemoveContainer" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.381989 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385244 4804 generic.go:334] "Generic (PLEG): container finished" podID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerDied","Data":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385517 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerDied","Data":"9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385587 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.406786 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gw5tb" podStartSLOduration=4.380302664 podStartE2EDuration="59.406769616s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.429966486 +0000 UTC m=+151.224846470" lastFinishedPulling="2026-01-28 11:25:30.456433438 +0000 UTC m=+206.251313422" observedRunningTime="2026-01-28 11:25:31.406391703 +0000 UTC m=+207.201271687" watchObservedRunningTime="2026-01-28 11:25:31.406769616 +0000 UTC m=+207.201649600" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.429840 4804 scope.go:117] "RemoveContainer" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: E0128 11:25:31.430532 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": container with ID starting with 89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635 not found: ID does not exist" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.430575 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"} err="failed to get container status \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": rpc error: code = NotFound desc = could not find container \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": container with ID starting with 89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635 not found: ID does not exist" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.430623 4804 scope.go:117] "RemoveContainer" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.441975 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.450599 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.455950 4804 scope.go:117] "RemoveContainer" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: E0128 11:25:31.456414 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": container with ID starting with 0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b not found: ID does not exist" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.456454 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"} err="failed to get container status \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": rpc error: code = NotFound desc = could 
not find container \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": container with ID starting with 0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b not found: ID does not exist" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.460962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.464587 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317210 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317470 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-utilities" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317488 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-utilities" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317501 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317509 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317519 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317528 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317543 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-content" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317551 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-content" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317570 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317581 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317711 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317730 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.318216 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321778 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321915 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321968 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322197 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322244 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322367 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322459 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322525 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: W0128 11:25:32.326393 4804 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.326448 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326634 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: W0128 11:25:32.326668 4804 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.326696 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326813 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326966 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.328811 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.332075 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.334278 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.355180 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.391459 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2" exitCode=0 Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.391507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460365 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460471 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: 
\"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460589 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460630 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460659 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460697 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.551086 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.551149 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562340 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562385 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 
crc kubenswrapper[4804]: I0128 11:25:32.562411 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562441 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562538 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562566 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562601 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563583 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563583 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.564088 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.567341 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.567542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.577714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.579554 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.594649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.640431 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.922338 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" path="/var/lib/kubelet/pods/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea/volumes" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.923024 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" path="/var/lib/kubelet/pods/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9/volumes" Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.066025 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.334014 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.398415 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerStarted","Data":"ea883bc2d51aafd97d5cd59b8bd8970b0e6abb434f296bbe9aa56e76957157f5"} Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.400356 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"} Jan 28 11:25:33 crc kubenswrapper[4804]: E0128 11:25:33.564286 4804 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:25:33 crc kubenswrapper[4804]: E0128 11:25:33.564394 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config podName:779944ca-d8be-40c0-89ac-1e1b3208eed2 nodeName:}" failed. No retries permitted until 2026-01-28 11:25:34.064372264 +0000 UTC m=+209.859252248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config") pod "route-controller-manager-7cbb595b88-w8rrl" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.637238 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.080967 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.082143 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.146602 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.425963 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48gg7" podStartSLOduration=5.917432278 podStartE2EDuration="1m2.425948338s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.45541721 +0000 UTC m=+151.250297194" lastFinishedPulling="2026-01-28 11:25:31.96393327 +0000 UTC m=+207.758813254" observedRunningTime="2026-01-28 11:25:34.422295286 +0000 UTC m=+210.217175270" watchObservedRunningTime="2026-01-28 11:25:34.425948338 +0000 UTC m=+210.220828322" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.700899 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.700975 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.930120 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.930169 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.524503 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:36 crc kubenswrapper[4804]: W0128 11:25:36.536480 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779944ca_d8be_40c0_89ac_1e1b3208eed2.slice/crio-293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1 WatchSource:0}: Error finding container 293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1: Status 404 returned 
error can't find the container with id 293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1
Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.739449 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" probeResult="failure" output=<
Jan 28 11:25:36 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 11:25:36 crc kubenswrapper[4804]: >
Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.974108 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" probeResult="failure" output=<
Jan 28 11:25:36 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 11:25:36 crc kubenswrapper[4804]: >
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422253 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerStarted","Data":"9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186"}
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422304 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerStarted","Data":"293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1"}
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422522 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.424464 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"}
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.425508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerStarted","Data":"dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f"}
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.425905 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.427575 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.430468 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.444721 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" podStartSLOduration=7.444705655 podStartE2EDuration="7.444705655s" podCreationTimestamp="2026-01-28 11:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:37.444111486 +0000 UTC m=+213.238991480" watchObservedRunningTime="2026-01-28 11:25:37.444705655 +0000 UTC m=+213.239585639"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.548442 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" podStartSLOduration=7.5484255860000005 podStartE2EDuration="7.548425586s" podCreationTimestamp="2026-01-28 11:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:37.525699382 +0000 UTC m=+213.320579366" watchObservedRunningTime="2026-01-28 11:25:37.548425586 +0000 UTC m=+213.343305570"
Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.548631 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4842n" podStartSLOduration=4.000733532 podStartE2EDuration="1m3.548627063s" podCreationTimestamp="2026-01-28 11:24:34 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.530874619 +0000 UTC m=+152.325754603" lastFinishedPulling="2026-01-28 11:25:36.07876815 +0000 UTC m=+211.873648134" observedRunningTime="2026-01-28 11:25:37.545245911 +0000 UTC m=+213.340125895" watchObservedRunningTime="2026-01-28 11:25:37.548627063 +0000 UTC m=+213.343507047"
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.581813 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582060 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582100 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582734 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582789 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5" gracePeriod=600
Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.592810 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gw5tb"
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.056807 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.056852 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.104151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.472284 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5" exitCode=0
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.472432 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"}
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.473121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"}
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.520573 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.825812 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"]
Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.204710 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" containerID="cri-o://79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d" gracePeriod=15
Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.483767 4804 generic.go:334] "Generic (PLEG): container finished" podID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerID="79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d" exitCode=0
Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.483855 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerDied","Data":"79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d"}
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.221042 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.258615 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"]
Jan 28 11:25:45 crc kubenswrapper[4804]: E0128 11:25:45.258998 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259023 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259182 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259833 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.276856 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"]
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323294 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323349 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323400 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323472 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323513 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323551 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323574 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323601 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323619 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") "
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.324338 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325475 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325470 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325705 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.330625 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331117 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331228 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331449 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331459 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p" (OuterVolumeSpecName: "kube-api-access-6vl2p") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "kube-api-access-6vl2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.338435 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.339061 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.370927 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.371186 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.409151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.424942 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.424998 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425033 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425072 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425208 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425250 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425276 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425354 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425375 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425421 
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425437 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425451 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425462 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425475 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425506 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425516 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425525 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425534 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425543 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425552 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425561 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425569 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425579 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490462 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerDied","Data":"3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671"}
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490755 4804 scope.go:117] "RemoveContainer" containerID="79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490645 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" containerID="cri-o://67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" gracePeriod=2
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.491137 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.523719 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"]
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.524062 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"]
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526475 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526514 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526551 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526607 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526664 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526701 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526765 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526789 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526845 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
\"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526868 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526911 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.528453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530147 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530776 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530980 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.531267 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.532540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.533318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.534673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.535746 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.536470 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.538103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.541703 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.545898 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.586036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.748222 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.786814 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.914040 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.965051 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.004933 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035674 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035811 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035924 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.038906 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities" (OuterVolumeSpecName: "utilities") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.046451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7" (OuterVolumeSpecName: "kube-api-access-l6sc7") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "kube-api-access-l6sc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.050334 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"] Jan 28 11:25:46 crc kubenswrapper[4804]: W0128 11:25:46.058343 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod219ecee2_929c_4499_b2d2_47264524ae3f.slice/crio-f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c WatchSource:0}: Error finding container f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c: Status 404 returned error can't find the container with id f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.089441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137805 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137836 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137851 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508679 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" exitCode=0 Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508774 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"} Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.509651 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"306a58f4bdfd74cc31f69b2bdc88525986d7ff5e31a732a9de2866902df8686e"} Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.509700 4804 scope.go:117] "RemoveContainer" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.515352 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" event={"ID":"219ecee2-929c-4499-b2d2-47264524ae3f","Type":"ContainerStarted","Data":"17823d02e7f4945c97550eea1936a3f942b18a3ba5da12edf1c5d10a067f903a"} Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.515479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" event={"ID":"219ecee2-929c-4499-b2d2-47264524ae3f","Type":"ContainerStarted","Data":"f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c"} Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.536067 4804 scope.go:117] "RemoveContainer" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.558804 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" podStartSLOduration=27.558765887 podStartE2EDuration="27.558765887s" podCreationTimestamp="2026-01-28 11:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:46.555780928 +0000 UTC m=+222.350661002" watchObservedRunningTime="2026-01-28 11:25:46.558765887 +0000 UTC m=+222.353645911" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.574837 4804 scope.go:117] "RemoveContainer" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.592775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.598696 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.612474 4804 scope.go:117] "RemoveContainer" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.613832 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": container with ID starting with 67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c not found: ID does not exist" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 
11:25:46.613923 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"} err="failed to get container status \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": rpc error: code = NotFound desc = could not find container \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": container with ID starting with 67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c not found: ID does not exist" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.613960 4804 scope.go:117] "RemoveContainer" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc" Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.615033 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": container with ID starting with 85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc not found: ID does not exist" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615150 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} err="failed to get container status \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": rpc error: code = NotFound desc = could not find container \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": container with ID starting with 85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc not found: ID does not exist" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615181 4804 scope.go:117] "RemoveContainer" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98" Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.615771 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": container with ID starting with 3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98 not found: ID does not exist" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615855 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"} err="failed to get container status \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": rpc error: code = NotFound desc = could not find container \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": container with ID starting with 3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98 not found: ID does not exist" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.629405 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.924293 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" path="/var/lib/kubelet/pods/23f32834-88e4-454d-81fe-6370a2bc8e0b/volumes" Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.925716 4804 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" path="/var/lib/kubelet/pods/5054f20f-444d-40e8-ad18-3515e1ff2638/volumes" Jan 28 11:25:47 crc kubenswrapper[4804]: I0128 11:25:47.524421 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:47 crc kubenswrapper[4804]: I0128 11:25:47.535649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.028707 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.029097 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" containerID="cri-o://bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" gracePeriod=2 Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.485803 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530448 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" exitCode=0 Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530510 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530591 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"} Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"8508960517ab52e83d2de6d52c76bf4bc148c42531ea9ecd0a9fb9ecc845cace"} Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530653 4804 scope.go:117] "RemoveContainer" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.531201 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" containerID="cri-o://9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" gracePeriod=2 Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.549618 4804 scope.go:117] "RemoveContainer" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.563577 4804 scope.go:117] "RemoveContainer" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575118 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod 
\"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575217 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.576487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities" (OuterVolumeSpecName: "utilities") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.580730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg" (OuterVolumeSpecName: "kube-api-access-9n2zg") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "kube-api-access-9n2zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655274 4804 scope.go:117] "RemoveContainer" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.655822 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": container with ID starting with bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c not found: ID does not exist" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655862 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"} err="failed to get container status \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": rpc error: code = NotFound desc = could not find container \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": container with ID starting with bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c not found: ID does not exist" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655916 4804 scope.go:117] "RemoveContainer" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c" Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.656308 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": container with ID starting with 6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c not found: ID does not exist" 
containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656344 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"} err="failed to get container status \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": rpc error: code = NotFound desc = could not find container \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": container with ID starting with 6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c not found: ID does not exist" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656371 4804 scope.go:117] "RemoveContainer" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc" Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.656764 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": container with ID starting with 7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc not found: ID does not exist" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656793 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"} err="failed to get container status \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": rpc error: code = NotFound desc = could not find container \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": container with ID starting with 7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc not found: ID does not exist" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.678769 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.678806 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.732309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.780330 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.859716 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.862348 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.921049 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" path="/var/lib/kubelet/pods/759bdf85-0cca-46db-8126-fab61a8664a8/volumes" Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.955388 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083675 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083745 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083788 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.084647 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities" (OuterVolumeSpecName: "utilities") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.087750 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v" (OuterVolumeSpecName: "kube-api-access-4gn5v") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "kube-api-access-4gn5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.102765 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185367 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185402 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185424 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538070 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" exitCode=0 Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538124 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"} Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538547 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195"} Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538570 4804 scope.go:117] "RemoveContainer" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.557107 4804 scope.go:117] "RemoveContainer" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.565068 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.567145 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.583159 4804 scope.go:117] "RemoveContainer" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.600559 4804 scope.go:117] "RemoveContainer" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601009 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": container with ID starting with 9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104 not found: ID does not exist" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601076 4804 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"} err="failed to get container status \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": rpc error: code = NotFound desc = could not find container \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": container with ID starting with 9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104 not found: ID does not exist" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601111 4804 scope.go:117] "RemoveContainer" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2" Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601521 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": container with ID starting with f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2 not found: ID does not exist" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601553 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} err="failed to get container status \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": rpc error: code = NotFound desc = could not find container \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": container with ID starting with f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2 not found: ID does not exist" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601570 4804 scope.go:117] "RemoveContainer" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f" Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601859 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": container with ID starting with f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f not found: ID does not exist" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f" Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601973 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"} err="failed to get container status \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": rpc error: code = NotFound desc = could not find container \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": container with ID starting with f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f not found: ID does not exist" Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.305715 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.305959 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" 
containerID="cri-o://dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" gracePeriod=30 Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.411058 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.411373 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" containerID="cri-o://9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" gracePeriod=30 Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.547488 4804 generic.go:334] "Generic (PLEG): container finished" podID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerID="9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" exitCode=0 Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.547578 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerDied","Data":"9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186"} Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.549469 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerID="dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" exitCode=0 Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.549525 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerDied","Data":"dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f"} Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.930591 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac859130-1b71-4993-ab3d-66600459a32a" path="/var/lib/kubelet/pods/ac859130-1b71-4993-ab3d-66600459a32a/volumes" Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.942971 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.013117 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106394 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106479 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106504 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106548 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106707 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config" (OuterVolumeSpecName: "config") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: 
"779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca" (OuterVolumeSpecName: "client-ca") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108523 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.107741 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108655 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config" (OuterVolumeSpecName: "config") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111656 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.112195 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx" (OuterVolumeSpecName: "kube-api-access-jhhvx") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "kube-api-access-jhhvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111689 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k" (OuterVolumeSpecName: "kube-api-access-nrp5k") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "kube-api-access-nrp5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111791 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208538 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208839 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208945 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209024 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209090 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209147 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209220 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209277 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209331 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.558532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerDied","Data":"293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1"} Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.559197 4804 scope.go:117] "RemoveContainer" containerID="9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.558582 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.560054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerDied","Data":"ea883bc2d51aafd97d5cd59b8bd8970b0e6abb434f296bbe9aa56e76957157f5"} Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.560120 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.581393 4804 scope.go:117] "RemoveContainer" containerID="dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.606941 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.617367 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.621942 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.625541 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335447 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335812 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335836 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335850 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335858 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335874 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335897 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335906 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335924 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335932 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335966 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335977 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335988 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335995 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336008 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336015 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336026 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336033 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336048 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336055 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336071 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336227 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336246 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336257 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336268 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 
11:25:52.336275 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336668 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.337286 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.337770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.340563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.341675 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342212 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342224 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342293 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342302 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342799 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343095 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343181 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343524 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343724 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343751 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.356467 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.362555 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.364943 4804 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.425783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426125 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426638 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426816 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426990 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427145 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " 
pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427519 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2pp\" (UniqueName: \"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528109 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528348 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528601 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528741 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2pp\" (UniqueName: \"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529250 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529962 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc 
kubenswrapper[4804]: I0128 11:25:52.529993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.530076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529833 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.530577 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.532676 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.532817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.537988 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.538214 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.544024 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2pp\" (UniqueName: \"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.544749 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.682349 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.699991 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.906032 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.927116 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" path="/var/lib/kubelet/pods/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9/volumes" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.929905 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" path="/var/lib/kubelet/pods/779944ca-d8be-40c0-89ac-1e1b3208eed2/volumes" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.958404 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: W0128 11:25:52.978289 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe636092_9be6_463c_ae35_758569ce2211.slice/crio-05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e WatchSource:0}: Error finding container 05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e: Status 404 returned error can't find the container with id 05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.579293 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" event={"ID":"be636092-9be6-463c-ae35-758569ce2211","Type":"ContainerStarted","Data":"629ea96b15a2c2dabf3f8d6b99390dc1148c11deb1b59f66ded8d6d41e0aa9f5"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.581042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" 
event={"ID":"be636092-9be6-463c-ae35-758569ce2211","Type":"ContainerStarted","Data":"05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.581139 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.582774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" event={"ID":"6a543af3-067a-4432-8d29-3b98286e3b7f","Type":"ContainerStarted","Data":"3c687d2da989738286afbb0597757621a25e79ba8c4a925728be40a3100df54d"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.582801 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" event={"ID":"6a543af3-067a-4432-8d29-3b98286e3b7f","Type":"ContainerStarted","Data":"1c118399712884cdd129b26e4112bfc42c2aea446d0d9a01af8e8deaec8869c5"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.583058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.585701 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.594453 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.607019 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" podStartSLOduration=3.606987826 podStartE2EDuration="3.606987826s" podCreationTimestamp="2026-01-28 11:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:53.599161597 +0000 UTC m=+229.394041591" watchObservedRunningTime="2026-01-28 11:25:53.606987826 +0000 UTC m=+229.401867810" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.626539 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" podStartSLOduration=3.626516984 podStartE2EDuration="3.626516984s" podCreationTimestamp="2026-01-28 11:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:53.616860774 +0000 UTC m=+229.411740758" watchObservedRunningTime="2026-01-28 11:25:53.626516984 +0000 UTC m=+229.421396968" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.170638 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173751 4804 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173789 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173914 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174274 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174361 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174378 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174390 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174397 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174409 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174417 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174434 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174442 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174463 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174476 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174484 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174590 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174608 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 
11:26:00.174623 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174641 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174650 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174660 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174692 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174750 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174781 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174792 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174800 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174928 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.176852 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334409 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334785 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334815 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334876 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334941 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334980 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.335012 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435872 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435970 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436022 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436038 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436078 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436116 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436149 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436164 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436204 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436216 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436227 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436243 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436264 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.633349 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.635662 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636564 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636589 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636597 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636605 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" exitCode=2 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636675 4804 scope.go:117] "RemoveContainer" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.638817 4804 generic.go:334] "Generic (PLEG): container finished" podID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerID="f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185" exitCode=0 Jan 28 
11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.638848 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerDied","Data":"f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185"} Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.639626 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.648911 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.997210 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.997924 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156260 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156472 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock" (OuterVolumeSpecName: "var-lock") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.161837 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257606 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257918 4804 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257927 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.532274 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533151 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533588 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533974 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.657152 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.657937 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" exitCode=0 Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.658007 4804 scope.go:117] "RemoveContainer" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.658064 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerDied","Data":"1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba"} Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659674 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659697 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662701 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662863 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662871 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663010 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663087 4804 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663104 4804 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663114 4804 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.672705 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.673105 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.674655 4804 scope.go:117] "RemoveContainer" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.686825 4804 scope.go:117] "RemoveContainer" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.701222 4804 scope.go:117] "RemoveContainer" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.714490 4804 scope.go:117] "RemoveContainer" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.729335 4804 scope.go:117] "RemoveContainer" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.747786 4804 scope.go:117] "RemoveContainer" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.748354 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": container with ID starting with a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e not found: ID does not exist" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748423 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e"} err="failed to get container status \"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": rpc error: code = NotFound desc = could not find container 
\"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": container with ID starting with a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748486 4804 scope.go:117] "RemoveContainer" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.748903 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": container with ID starting with b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051 not found: ID does not exist" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748960 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051"} err="failed to get container status \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": rpc error: code = NotFound desc = could not find container \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": container with ID starting with b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748991 4804 scope.go:117] "RemoveContainer" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.749367 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": container with ID starting with cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029 not found: ID does not exist" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749417 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029"} err="failed to get container status \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": rpc error: code = NotFound desc = could not find container \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": container with ID starting with cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749461 4804 scope.go:117] "RemoveContainer" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.749748 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": container with ID starting with 82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81 not found: ID does not exist" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749775 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81"} 
err="failed to get container status \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": rpc error: code = NotFound desc = could not find container \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": container with ID starting with 82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749792 4804 scope.go:117] "RemoveContainer" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.750194 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": container with ID starting with a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e not found: ID does not exist" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750218 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e"} err="failed to get container status \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": rpc error: code = NotFound desc = could not find container \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": container with ID starting with a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750232 4804 scope.go:117] "RemoveContainer" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.750535 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": container with ID starting with add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1 not found: ID does not exist" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750559 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1"} err="failed to get container status \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": rpc error: code = NotFound desc = could not find container \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": container with ID starting with add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.920677 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.962424 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.963153 4804 status_manager.go:851] 
"Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:04 crc kubenswrapper[4804]: I0128 11:26:04.918559 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:04 crc kubenswrapper[4804]: I0128 11:26:04.919184 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.204451 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.205016 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:05 crc kubenswrapper[4804]: W0128 11:26:05.224180 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c WatchSource:0}: Error finding container be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c: Status 404 returned error can't find the container with id be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.230977 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ee16dc8410f5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,LastTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.462067 4804 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.462992 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.463409 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.463758 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.464075 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.464163 4804 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.464484 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.665392 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.676198 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b"} Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.676250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c"} Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.677108 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.677223 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.27:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:06 crc kubenswrapper[4804]: E0128 11:26:06.066405 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Jan 28 11:26:07 crc kubenswrapper[4804]: E0128 11:26:07.095200 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Jan 28 11:26:08 crc kubenswrapper[4804]: E0128 11:26:08.696094 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s" Jan 28 11:26:11 crc kubenswrapper[4804]: E0128 11:26:11.896955 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="6.4s" Jan 28 11:26:12 crc kubenswrapper[4804]: E0128 11:26:12.759366 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ee16dc8410f5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,LastTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.915040 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.916952 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.934614 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.934709 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:12 crc kubenswrapper[4804]: E0128 11:26:12.935438 4804 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.936228 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.719943 4804 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4164ba9c34e39731302feb9e8d26eec3c5c006ee15174972812b45fc1503d60c" exitCode=0 Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720040 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4164ba9c34e39731302feb9e8d26eec3c5c006ee15174972812b45fc1503d60c"} Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2f01abdff9752f0be3a033a99749a23ee1d341b7eec6c97b1b4bfc4632ccfd61"} Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720481 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720496 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720928 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:13 crc kubenswrapper[4804]: E0128 11:26:13.720930 4804 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741592 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741666 4804 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18" exitCode=1 Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741787 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.742382 4804 scope.go:117] "RemoveContainer" containerID="8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9dcacc22419228f8a6f17e6067b29106abbfd910a105c01de60cb2fb3418d6f4"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750586 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef9c5c4c7ab1908f624ea8804a418a1b5ea85b984a502be7f347e7dc5d0b3a76"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750598 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e9f564a7bae30192accf7302dd9ea1753b812d4118c2d90b0187db04cc2adbd"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750608 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"baaddbcae154f2448017c041d454a4011e6bb1a309c4f9f8577c627358883f20"} Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.781110 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.781601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b97b33b8da7f4ebb0737883285456cbd33eaf784f8224902c085a19924d66810"} Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.786841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2d04acc5aa14fd8cd0658da7e77cfd4aba02dc55df8377e4c641dd8a429330e"} Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787075 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787257 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787300 4804 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.937022 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.937300 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.942406 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.796039 4804 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.822731 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.822789 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.826970 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.829783 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8734f58-f5ce-4a42-8c7f-0620c5bede02" Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.322462 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.326532 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.827940 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.828114 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.828140 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:24 crc kubenswrapper[4804]: I0128 11:26:24.144533 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:26:24 crc kubenswrapper[4804]: I0128 11:26:24.926298 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8734f58-f5ce-4a42-8c7f-0620c5bede02" Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.180745 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.406340 4804 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.480035 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.906418 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.150834 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.204333 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.961643 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.163333 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.208480 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.494505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.042341 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.114270 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.383084 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.432965 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.496043 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.523794 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.524426 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.580250 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.623680 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.645058 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.743247 4804 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.953155 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.992785 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.058235 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.066063 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.207413 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.230143 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.406797 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.407517 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.423267 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.756202 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.808304 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.827316 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.836244 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.922520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.044461 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.286619 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.293343 4804 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.299035 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.299097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.303386 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.317202 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.317188427 podStartE2EDuration="15.317188427s" podCreationTimestamp="2026-01-28 11:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:26:35.315185297 +0000 UTC m=+271.110065301" watchObservedRunningTime="2026-01-28 11:26:35.317188427 +0000 UTC m=+271.112068411" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.333467 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.344027 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.388256 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.399912 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.400639 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.406812 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.422744 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.434919 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.449966 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.471590 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.664131 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.754977 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.798782 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.800041 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 11:26:35 crc kubenswrapper[4804]: 
I0128 11:26:35.918668 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.938296 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.063695 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.148563 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.150187 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.190527 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.194403 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.227957 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.245725 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.306378 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.325869 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.359105 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.437167 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.452068 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.658992 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.762678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.810400 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.835059 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.857900 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.927017 4804 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.082922 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.102928 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.104289 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.136570 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.161650 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.250814 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.251626 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.489470 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.512638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.516499 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.529537 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.557590 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.568135 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.579447 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.636829 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.747421 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.807026 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.812188 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 
11:26:37.817024 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.856207 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.864641 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.885325 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.901802 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.919256 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.939437 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.002838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.013092 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.020430 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.078914 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.248772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.311062 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.391266 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.464945 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.465482 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.478026 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.501049 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.656181 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.658223 4804 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.672803 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.732297 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.751700 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.801231 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.853120 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.875698 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.900956 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.974776 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.013548 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.063570 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.101925 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.178929 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.236827 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.258294 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.303956 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.388190 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.401366 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.412309 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.437507 4804 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.530443 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.568738 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.612050 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.741657 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.759294 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.964741 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.984545 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.003782 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.059624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.283563 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.313147 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.446392 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.464027 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.568011 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.576792 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.608409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.632549 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.633555 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 
11:26:40.640762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.640797 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.690216 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.782450 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.782463 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.841379 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.851608 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.905146 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.911520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.921941 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.954616 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.972359 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.993573 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.003678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.107618 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.160707 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.181754 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.271400 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.406620 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 11:26:41 
crc kubenswrapper[4804]: I0128 11:26:41.439078 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.440496 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.444377 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.450686 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.482052 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.493118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.647781 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.713334 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.953330 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.953868 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.989955 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:41.992770 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.062514 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.081225 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.082436 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.114429 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.145482 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.174089 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.232300 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 11:26:42 crc 
kubenswrapper[4804]: I0128 11:26:42.378231 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.510641 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.582931 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.604687 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.619165 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.771156 4804 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.865730 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.899934 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.910742 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.948694 4804 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.949041 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" gracePeriod=5 Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.987677 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.992551 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.152638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.206148 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.239500 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.250679 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.262620 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.331750 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.334099 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.488134 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.501494 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.518015 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.559872 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.641165 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.641545 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.762347 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.837330 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.838240 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.839624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.972746 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.974009 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.981309 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.013270 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.035689 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.100981 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.149463 4804 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.151229 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.262424 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.320288 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.381022 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.398088 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.426034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.465258 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.652533 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.667753 4804 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.703565 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.836279 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.844031 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.915453 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.928912 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.939619 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.079400 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.079671 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.402111 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.609596 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.626279 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.090864 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.184337 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.397389 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.575505 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.819219 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.116005 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.118191 4804 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.461442 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.488338 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.489185 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" containerID="cri-o://fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.497737 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.498042 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hzmvb" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" containerID="cri-o://42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.505917 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.506124 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" containerID="cri-o://4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.517062 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.517280 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9b7c6" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" containerID="cri-o://3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.530178 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.530422 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" containerID="cri-o://631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.565544 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.566175 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566201 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.566229 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566238 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566529 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566565 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.567294 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.593642 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636284 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636376 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636446 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.733952 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb641b655_0d3e_4838_8c87_fc72873f1944.slice/crio-conmon-631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739434 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739494 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739555 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.741055 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.748693 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.761497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.830721 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.933481 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.974066 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992631 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" exitCode=0 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"60c5c3bae740bf47c18e8908e6f28f0a1a7fe1ff6bab40703594d2789651297c"} Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992732 4804 scope.go:117] "RemoveContainer" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992836 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.995847 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" exitCode=0 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.995904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.006630 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb959019-0f9d-4210-8410-6b3c00b02337" containerID="4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.006707 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerDied","Data":"4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.012278 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.012358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.016529 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" containerID="631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.016555 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044154 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044228 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044287 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.045351 4804 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities" (OuterVolumeSpecName: "utilities") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.050680 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l" (OuterVolumeSpecName: "kube-api-access-wms6l") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "kube-api-access-wms6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.085479 4804 scope.go:117] "RemoveContainer" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.099382 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.111101 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.113553 4804 scope.go:117] "RemoveContainer" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.116138 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.128858 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.133650 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150168 4804 scope.go:117] "RemoveContainer" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150536 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.150903 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": container with ID starting with fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5 not found: ID does not exist" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150950 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} err="failed to get container status \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": rpc error: code = NotFound desc = could not find container \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": container with ID starting with fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150986 4804 scope.go:117] "RemoveContainer" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.154993 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": container with ID starting with f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129 not found: ID does not exist" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155035 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129"} err="failed to get container status \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": rpc error: code = NotFound desc = could not find container \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": container with ID starting with f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155060 4804 scope.go:117] "RemoveContainer" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.155375 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": container with ID starting with db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1 not found: ID does not exist" 
containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155406 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1"} err="failed to get container status \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": rpc error: code = NotFound desc = could not find container \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": container with ID starting with db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160192 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160298 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160332 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160371 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160400 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160428 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160499 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 
11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160523 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160576 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160603 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161014 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161037 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161052 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.164703 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt" (OuterVolumeSpecName: "kube-api-access-dklnt") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "kube-api-access-dklnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.165002 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities" (OuterVolumeSpecName: "utilities") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.165631 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities" (OuterVolumeSpecName: "utilities") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.166346 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.166968 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities" (OuterVolumeSpecName: "utilities") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.168134 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p" (OuterVolumeSpecName: "kube-api-access-4qz4p") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "kube-api-access-4qz4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.169725 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr" (OuterVolumeSpecName: "kube-api-access-zdvxr") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "kube-api-access-zdvxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.169767 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf" (OuterVolumeSpecName: "kube-api-access-ppxwf") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "kube-api-access-ppxwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.170400 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.202648 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.209532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265008 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265041 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265052 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265062 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265071 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265079 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265087 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265097 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265104 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265112 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265119 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc 
kubenswrapper[4804]: I0128 11:26:48.294422 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.321412 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.324248 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.363077 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.366793 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.370392 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.473826 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.520422 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.521188 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569546 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569642 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569654 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569746 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569849 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569915 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570181 4804 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570203 4804 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570214 4804 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570229 4804 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.575711 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.671652 4804 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.922440 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" path="/var/lib/kubelet/pods/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d/volumes" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.923595 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022620 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerDied","Data":"da180074ac3e1b702af197f95701d1cff294f3e8895503fdbfbde3d61d0ef87e"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022999 4804 scope.go:117] "RemoveContainer" containerID="4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.025090 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" event={"ID":"349fc9e3-a236-44fd-b7b9-ee08f25c58fd","Type":"ContainerStarted","Data":"9da03e5fdc5f3c0c17b1a579763363b0e575c125e01822f30376b68abbdbe2c9"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.025133 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" event={"ID":"349fc9e3-a236-44fd-b7b9-ee08f25c58fd","Type":"ContainerStarted","Data":"fd05b2d05ff2ac6e274fd94eb02e4e64d7931052ca41aa3d8272968ecffe0ef4"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.027237 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.027394 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.036509 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.037939 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"b9f8fd7843e0d657401a449864e7360a08eaacd9d3a996600b88abc62b6de5e9"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.042237 4804 scope.go:117] "RemoveContainer" containerID="3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.045002 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.045297 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.046233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" podStartSLOduration=2.046215579 podStartE2EDuration="2.046215579s" podCreationTimestamp="2026-01-28 11:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:26:49.045466467 +0000 UTC m=+284.840346461" watchObservedRunningTime="2026-01-28 11:26:49.046215579 +0000 UTC m=+284.841095573" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.048893 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.048961 4804 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" exitCode=137 Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.049533 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.071248 4804 scope.go:117] "RemoveContainer" containerID="d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.074939 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.086445 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.091148 4804 scope.go:117] "RemoveContainer" containerID="04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.094068 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.101922 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.106539 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.109984 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.116741 4804 scope.go:117] "RemoveContainer" containerID="631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.117515 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.121453 4804 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.128641 4804 scope.go:117] "RemoveContainer" containerID="7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.142060 4804 scope.go:117] "RemoveContainer" containerID="38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.168279 4804 scope.go:117] "RemoveContainer" containerID="42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.181349 4804 scope.go:117] "RemoveContainer" containerID="ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.197315 4804 scope.go:117] "RemoveContainer" containerID="224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.211007 4804 scope.go:117] "RemoveContainer" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.229873 4804 scope.go:117] "RemoveContainer" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: E0128 11:26:49.230367 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": container with ID starting with a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b not found: ID does not exist" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.230398 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b"} err="failed to get container status \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": rpc error: code = NotFound desc = could not find container \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": container with ID starting with a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b not found: ID does not exist" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.809340 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.064557 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.074779 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.921974 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" path="/var/lib/kubelet/pods/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.923499 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" path="/var/lib/kubelet/pods/6caae643-ab85-4628-bcb1-9c0ecc48c568/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.924138 4804 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" path="/var/lib/kubelet/pods/b641b655-0d3e-4838-8c87-fc72873f1944/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.925304 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" path="/var/lib/kubelet/pods/bb959019-0f9d-4210-8410-6b3c00b02337/volumes" Jan 28 11:27:04 crc kubenswrapper[4804]: I0128 11:27:04.743733 4804 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.487784 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wbxgh"] Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488519 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488534 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488548 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488556 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488566 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488573 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488584 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488591 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488610 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488621 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488628 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488644 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488650 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" 
containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488660 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488666 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488675 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488682 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488690 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488696 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488705 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488710 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488717 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488723 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488731 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488737 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488823 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488834 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488842 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488850 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488861 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.489547 4804 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.491885 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.513522 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbxgh"] Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.589840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.590024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.590085 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691476 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691915 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"] Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691993 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.692055 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.708727 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.710978 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"]
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.712370 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.721977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.823437 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894175 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894421 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995449 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995527 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.996166 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.996399 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.013463 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.037986 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.194506 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbxgh"]
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364685 4804 generic.go:334] "Generic (PLEG): container finished" podID="91e77bd7-6a7b-4b91-b47d-61e61d157acb" containerID="f7bfcb1fa1ea45b816b10d95c5b6718c2ba8bd93e908b6478ec77a57e3d240ab" exitCode=0
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364723 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerDied","Data":"f7bfcb1fa1ea45b816b10d95c5b6718c2ba8bd93e908b6478ec77a57e3d240ab"}
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364744 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"08a30b0f1eb7c69a3bedcee0c785b4f32de906b4ffb7be33be7d3fdf850fe06c"}
Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.407857 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"]
Jan 28 11:27:36 crc kubenswrapper[4804]: W0128 11:27:36.412858 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e326a9c_bf0f_4d43_87f0_f4c4e2667118.slice/crio-455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1 WatchSource:0}: Error finding container 455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1: Status 404 returned error can't find the container with id 455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.370786 4804 generic.go:334] "Generic (PLEG): container finished" podID="7e326a9c-bf0f-4d43-87f0-f4c4e2667118" containerID="609e18455ed5ea2a438ea430e22a3ace680e973fb7aec4a152150642a40ad467" exitCode=0
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.370874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerDied","Data":"609e18455ed5ea2a438ea430e22a3ace680e973fb7aec4a152150642a40ad467"}
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.371178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerStarted","Data":"455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1"}
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.373689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59"}
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.881783 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"]
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.882735 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.884399 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.892695 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"]
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040039 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040157 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040198 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.084155 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.085308 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.087837 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.096431 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140804 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140877 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.141318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.141530 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.166068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.242775 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.242963 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.243034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.285192 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.343936 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344065 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344717 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344917 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.363726 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.386090 4804 generic.go:334] "Generic (PLEG): container finished" podID="91e77bd7-6a7b-4b91-b47d-61e61d157acb" containerID="ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59" exitCode=0
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.386202 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerDied","Data":"ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59"}
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.388640 4804 generic.go:334] "Generic (PLEG): container finished" podID="7e326a9c-bf0f-4d43-87f0-f4c4e2667118" containerID="b35db1a1ff34ee952dcc074f0d6eefdc5c99af0e19ceed537d4718259de247de" exitCode=0
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.388691 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerDied","Data":"b35db1a1ff34ee952dcc074f0d6eefdc5c99af0e19ceed537d4718259de247de"}
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.408821 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.501854 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"]
Jan 28 11:27:38 crc kubenswrapper[4804]: W0128 11:27:38.510139 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d5e8a4_00e0_4aae_988b_d10e5f36cae7.slice/crio-873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d WatchSource:0}: Error finding container 873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d: Status 404 returned error can't find the container with id 873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d
Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.789786 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 11:27:38 crc kubenswrapper[4804]: W0128 11:27:38.797192 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod477f5ec7_c491_494c_add6_a233798ffdfa.slice/crio-5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18 WatchSource:0}: Error finding container 5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18: Status 404 returned error can't find the container with id 5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.394954 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerStarted","Data":"ac2864fcbdeba1e2f84d17b9ea054bc897d0e9de7a1e00ad13edbb198811ca36"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398702 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" containerID="97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98" exitCode=0
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398767 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398796 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerStarted","Data":"5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.402492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"a42ac5a1159848a8a66f9af3fde7993fcb0c35fa30816a1cfb4649ebec61d084"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404424 4804 generic.go:334] "Generic (PLEG): container finished" podID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerID="20b7980f00a6c53ee52c8489361ac28da3d88b28866ffa48c844fc6ceebb5e60" exitCode=0
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerDied","Data":"20b7980f00a6c53ee52c8489361ac28da3d88b28866ffa48c844fc6ceebb5e60"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404478 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d"}
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.418731 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfzfl" podStartSLOduration=2.885334306 podStartE2EDuration="4.418714538s" podCreationTimestamp="2026-01-28 11:27:35 +0000 UTC" firstStartedPulling="2026-01-28 11:27:37.3721794 +0000 UTC m=+333.167059384" lastFinishedPulling="2026-01-28 11:27:38.905559632 +0000 UTC m=+334.700439616" observedRunningTime="2026-01-28 11:27:39.414323982 +0000 UTC m=+335.209203986" watchObservedRunningTime="2026-01-28 11:27:39.418714538 +0000 UTC m=+335.213594522"
Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.473020 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wbxgh" podStartSLOduration=2.041442705 podStartE2EDuration="4.472994023s" podCreationTimestamp="2026-01-28 11:27:35 +0000 UTC" firstStartedPulling="2026-01-28 11:27:36.366088704 +0000 UTC m=+332.160968688" lastFinishedPulling="2026-01-28 11:27:38.797640022 +0000 UTC m=+334.592520006" observedRunningTime="2026-01-28 11:27:39.470732163 +0000 UTC m=+335.265612157" watchObservedRunningTime="2026-01-28 11:27:39.472994023 +0000 UTC m=+335.267874027"
Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.411817 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d"}
Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.413829 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" containerID="5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7" exitCode=0
Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.413907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7"}
Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.422049 4804 generic.go:334] "Generic (PLEG): container finished" podID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerID="6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d" exitCode=0
Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.422164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerDied","Data":"6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d"}
Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.425004 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerStarted","Data":"8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862"}
Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.462689 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8n6zc" podStartSLOduration=2.035730719 podStartE2EDuration="3.462671757s" podCreationTimestamp="2026-01-28 11:27:38 +0000 UTC" firstStartedPulling="2026-01-28 11:27:39.399966676 +0000 UTC m=+335.194846670" lastFinishedPulling="2026-01-28 11:27:40.826907734 +0000 UTC m=+336.621787708" observedRunningTime="2026-01-28 11:27:41.45824855 +0000 UTC m=+337.253128524" watchObservedRunningTime="2026-01-28 11:27:41.462671757 +0000 UTC m=+337.257551741"
Jan 28 11:27:42 crc kubenswrapper[4804]: I0128 11:27:42.582696 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:27:42 crc kubenswrapper[4804]: I0128 11:27:42.583261 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:27:44 crc kubenswrapper[4804]: I0128 11:27:44.453044 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"d454ae9d3be5e6e97b5bc793769ffafee3651468a260dcdf014b2b36201218e9"}
Jan 28 11:27:44 crc kubenswrapper[4804]: I0128 11:27:44.489079 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hfp4x" podStartSLOduration=3.547584105 podStartE2EDuration="7.489038427s" podCreationTimestamp="2026-01-28 11:27:37 +0000 UTC" firstStartedPulling="2026-01-28 11:27:39.405726345 +0000 UTC m=+335.200606329" lastFinishedPulling="2026-01-28 11:27:43.347180627 +0000 UTC m=+339.142060651" observedRunningTime="2026-01-28 11:27:44.486957062 +0000 UTC m=+340.281837086" watchObservedRunningTime="2026-01-28 11:27:44.489038427 +0000 UTC m=+340.283918541"
Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.824167 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.824625 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.873157 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.038435 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.038509 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.075116 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.512855 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfzfl"
Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.529099 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wbxgh"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.285869 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.285986 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.409096 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.409187 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.455831 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.523127 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 11:27:49 crc kubenswrapper[4804]: I0128 11:27:49.325422 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hfp4x" podUID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerName="registry-server" probeResult="failure" output=<
Jan 28 11:27:49 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 11:27:49 crc kubenswrapper[4804]: >
Jan 28 11:27:58 crc kubenswrapper[4804]: I0128 11:27:58.331398 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:27:58 crc kubenswrapper[4804]: I0128 11:27:58.371515 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hfp4x"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.217994 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"]
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.219846 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.233566 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"]
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353284 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353471 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353639 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.385620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455842 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455975 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456093 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456206 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456686 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.457350 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.457939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.462993 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.463000 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.473325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.476480 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.534943 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:07 crc kubenswrapper[4804]: I0128 11:28:07.420236 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"]
Jan 28 11:28:07 crc kubenswrapper[4804]: W0128 11:28:07.422017 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9123b082_c385_4b95_b3d7_581636f5dae3.slice/crio-c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1 WatchSource:0}: Error finding container c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1: Status 404 returned error can't find the container with id c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1
Jan 28 11:28:08 crc kubenswrapper[4804]: I0128 11:28:08.175052 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" event={"ID":"9123b082-c385-4b95-b3d7-581636f5dae3","Type":"ContainerStarted","Data":"c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1"}
Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.180577 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" event={"ID":"9123b082-c385-4b95-b3d7-581636f5dae3","Type":"ContainerStarted","Data":"d4e4ca4f31104bfe30f08bfff7f58688eba391347cfbc1478433ee3646138d47"}
Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.182197 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.201872 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" podStartSLOduration=3.201856104 podStartE2EDuration="3.201856104s" podCreationTimestamp="2026-01-28 11:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:28:09.198384396 +0000 UTC m=+364.993264380" watchObservedRunningTime="2026-01-28 11:28:09.201856104 +0000 UTC m=+364.996736088"
Jan 28 11:28:12 crc kubenswrapper[4804]: I0128 11:28:12.582531 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:28:12 crc kubenswrapper[4804]: I0128 11:28:12.582607 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:28:26 crc kubenswrapper[4804]: I0128 11:28:26.539701 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp"
Jan 28 11:28:26 crc kubenswrapper[4804]: I0128 11:28:26.602493 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"]
Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.582306 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583086 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583171 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583832 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583908 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" gracePeriod=600
Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.391608 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" exitCode=0
Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.391700 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"}
Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.392277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"}
Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.392313 4804 scope.go:117] "RemoveContainer" containerID="3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"
Jan 28 11:28:51 crc kubenswrapper[4804]: I0128 11:28:51.644710 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry" containerID="cri-o://83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" gracePeriod=30
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.076252 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161719 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161787 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161962 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.162034 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.162092 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163034 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163468 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163683 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") "
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.164119 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.164824 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.173369 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174290 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b" (OuterVolumeSpecName: "kube-api-access-mnf5b") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "kube-api-access-mnf5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174740 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.178706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.201097 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.264935 4804 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.264984 4804 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265008 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265022 4804 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265036 4804 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265049 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") on node \"crc\" DevicePath \"\""
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453255 4804 generic.go:334] "Generic (PLEG): container finished" podID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" exitCode=0
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453305 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerDied","Data":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"}
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerDied","Data":"21c407385a0e63e468749b798e82d759e0bd8cab55527e3595f2c32049181c1c"}
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453357 4804 scope.go:117] "RemoveContainer" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453478 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.475209 4804 scope.go:117] "RemoveContainer" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"
Jan 28 11:28:52 crc kubenswrapper[4804]: E0128 11:28:52.477196 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": container with ID starting with 83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191 not found: ID does not exist" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.477232 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"} err="failed to get container status \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": rpc error: code = NotFound desc = could not find container \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": container with ID starting with 83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191 not found: ID does not exist"
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.485047 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"]
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.490427 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"]
Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.923213 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" path="/var/lib/kubelet/pods/436e3017-a787-4e60-97cd-7cc0cdd47a2d/volumes"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.181554 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"]
Jan 28 11:30:00 crc kubenswrapper[4804]: E0128 11:30:00.182408 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.182426 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.182559 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.183025 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.185682 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.189372 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.191205 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"]
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.337137 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.337319 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.337351 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438493 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438572 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438594 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.440976 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.447033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.453740 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.509100 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.704287 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"]
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.876236 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerStarted","Data":"647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551"}
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.876569 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerStarted","Data":"6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3"}
Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.894390 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" podStartSLOduration=0.894375044 podStartE2EDuration="894.375044ms" podCreationTimestamp="2026-01-28 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:30:00.891997898 +0000 UTC m=+476.686877892" watchObservedRunningTime="2026-01-28 11:30:00.894375044 +0000 UTC m=+476.689255038"
Jan 28 11:30:01 crc kubenswrapper[4804]: I0128 11:30:01.883987 4804 generic.go:334] "Generic (PLEG): container finished" podID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerID="647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551" exitCode=0
Jan 28 11:30:01 crc kubenswrapper[4804]: I0128 11:30:01.884042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerDied","Data":"647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551"}
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.120833 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") "
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275514 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") "
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275561 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") "
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.276416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.281441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n" (OuterVolumeSpecName: "kube-api-access-fqx8n") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "kube-api-access-fqx8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.281524 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377821 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377858 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377869 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898439 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerDied","Data":"6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3"} Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898669 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898500 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:42 crc kubenswrapper[4804]: I0128 11:30:42.582244 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:30:42 crc kubenswrapper[4804]: I0128 11:30:42.582825 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:12 crc kubenswrapper[4804]: I0128 11:31:12.582091 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:31:12 crc kubenswrapper[4804]: I0128 11:31:12.583952 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582081 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582680 
4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582725 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.583257 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.583311 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" gracePeriod=600 Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480436 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" exitCode=0 Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480523 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"} Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480789 4804 scope.go:117] "RemoveContainer" containerID="d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.236917 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.238700 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" containerID="cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239176 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" containerID="cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239295 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" 
podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" containerID="cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239395 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" containerID="cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239487 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239576 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" containerID="cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239663 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" containerID="cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.320620 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" containerID="cri-o://178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.586404 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.589282 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-acl-logging/0.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.590013 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-controller/0.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.592465 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643283 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qqcq"] Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643525 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643546 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643561 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643570 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643580 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kubecfg-setup" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643588 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kubecfg-setup" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643595 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643602 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643612 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643628 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643637 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643650 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643659 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643671 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643677 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643688 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643696 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643705 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643712 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643723 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643730 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643743 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643750 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643760 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643768 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643870 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643885 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643911 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643919 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643927 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643935 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643944 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: 
I0128 11:33:24.643957 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643965 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643972 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.644056 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644064 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644147 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644332 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.645732 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.716915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.716985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717010 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket" (OuterVolumeSpecName: "log-socket") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717449 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717498 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717525 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717552 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717585 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717612 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717600 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717638 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717721 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717776 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717800 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717868 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: 
\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718049 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718083 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718102 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718205 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718229 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718257 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718278 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717667 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718300 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717676 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718271 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718295 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log" (OuterVolumeSpecName: "node-log") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718367 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718395 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718440 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash" (OuterVolumeSpecName: "host-slash") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718469 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718494 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718534 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718582 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718603 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718629 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718633 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718722 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718793 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718868 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718942 4804 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718955 4804 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718966 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718976 4804 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718985 4804 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718993 4804 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719002 4804 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719011 4804 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719020 4804 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719037 4804 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719048 4804 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719056 4804 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719067 4804 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" 
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719090 4804 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719103 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719116 4804 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.723539 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp" (OuterVolumeSpecName: "kube-api-access-55hnp") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "kube-api-access-55hnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.723816 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.732200 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820222 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820646 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820742 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820809 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820826 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820873 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820911 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820925 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821033 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821074 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821102 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821135 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821166 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821199 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821230 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821253 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821277 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821295 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821310 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821334 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821339 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821405 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821544 4804 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821570 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821738 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821801 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821831 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821804 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821929 4804 reconciler_common.go:293] "Volume detached for volume
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821952 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822161 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822259 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822630 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.826370 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.843997 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.963861 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.172283 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.174511 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-acl-logging/0.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.174979 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-controller/0.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175314 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175343 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175353 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175363 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175372 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175379 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175387 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" exitCode=143 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175394 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" exitCode=143 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175439 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175470 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175485 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175497 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175519 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175533 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175545 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175552 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175558 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175563 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175570 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175576 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175583 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175589 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175596 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175606 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175615 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175623 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175629 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175635 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175643 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175650 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175657 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175664 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175671 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175680 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175691 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175699 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 
11:33:25.175706 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175713 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175720 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175727 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175733 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175740 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175746 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175753 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175771 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175779 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175786 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175793 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175799 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 
11:33:25.175805 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175811 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175817 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175824 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175830 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175846 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.176015 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.178981 4804 generic.go:334] "Generic (PLEG): container finished" podID="241322ad-bbc4-487d-9bd6-58659d5b9882" containerID="a1da0c80ef0c07fe35e93d7bc475becacbeafb7b7d255d553ad8e0602eeda221" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.179063 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerDied","Data":"a1da0c80ef0c07fe35e93d7bc475becacbeafb7b7d255d553ad8e0602eeda221"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.179108 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"45240fb9064957fadccc3ca7bd1954a047d59e304e671c2c3ffcfcb98b1d6d6c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.183816 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185594 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185652 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" exitCode=2 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185851 4804 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.186468 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.186792 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.213130 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.238701 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.264298 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.276184 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.283986 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.289069 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.303316 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.316010 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.331133 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.346646 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.431349 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.464544 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.465061 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465106 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465138 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.465630 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465671 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465697 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466045 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466071 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466088 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466413 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" 
containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466441 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466476 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466784 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466809 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466823 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467116 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467144 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467162 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467467 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467492 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467506 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467823 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467845 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467857 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.468188 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468213 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468230 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc 
kubenswrapper[4804]: E0128 11:33:25.468476 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468501 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468517 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468775 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468794 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.469052 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.469075 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470206 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470227 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc 
kubenswrapper[4804]: I0128 11:33:25.470652 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470683 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470978 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471003 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471262 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471294 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471540 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471564 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471906 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID 
starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471935 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472216 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472236 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472477 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472499 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472846 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472867 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473203 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473227 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473538 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473560 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473900 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473918 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474257 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474283 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474620 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474641 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474929 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 
28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474947 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475412 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475438 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475690 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475709 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475975 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476000 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476294 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476314 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476612 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status 
\"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476637 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476922 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476944 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477184 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477206 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477532 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477558 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477937 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477964 4804 scope.go:117] "RemoveContainer" 
containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478315 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478336 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478755 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478773 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479026 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479044 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479322 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479386 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479699 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find 
container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.195847 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"55ecfab67e305dc5b6e7f4356decbf27f746c94f1e55de297c28a2cb996f7115"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"c0bc493ac5246614bb4596907df43ca5dc092f99c052343d34e82e143d947a3e"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196341 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d363d36ca9b8f11ef135fc30d9a6046a4ad1675b73b29f09f3fd652a4e8f08fb"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196349 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"755b8db2608bc72501aaa8b2ba24273cfbab497f28f77e54eae374aba0bf6124"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d960bb6363b20e2aedc4aefbc0776a728e94f3965222c680f884277aeda30e09"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196366 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d6891089a1bb32eb1a00b1778f8b61aef0857d487f368d4562a1e82f83e797b5"} Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.922402 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" path="/var/lib/kubelet/pods/686039c6-ae16-45ac-bb9f-4c39d57d6c80/volumes" Jan 28 11:33:29 crc kubenswrapper[4804]: I0128 11:33:29.220154 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"ee57ccad0a12e421234adaa104af5b6c9040b0177132f7b4d9e9dc24b35db9d6"} Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.848111 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"] Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.849138 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.851642 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.851824 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.852175 4804 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j98lp" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.852407 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894733 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995450 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995571 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.996217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.997204 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.015343 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.168829 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.189793 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"] Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.212966 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213136 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213167 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213296 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"f5fc1178ecb5956f32ca89dd1eec5158503b804438c1f3c9066dfa6487876bb8"} Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234546 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234902 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235189 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235231 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235657 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.267966 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273656 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273713 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273733 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273771 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.277761 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.285992 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" podStartSLOduration=7.285960839 podStartE2EDuration="7.285960839s" podCreationTimestamp="2026-01-28 11:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:33:31.275429028 +0000 UTC m=+687.070309012" watchObservedRunningTime="2026-01-28 11:33:31.285960839 +0000 UTC m=+687.080840863" Jan 28 11:33:36 crc kubenswrapper[4804]: I0128 11:33:36.918634 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" Jan 28 11:33:36 crc kubenswrapper[4804]: E0128 11:33:36.921626 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" Jan 28 11:33:41 crc kubenswrapper[4804]: I0128 11:33:41.915204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:41 crc kubenswrapper[4804]: I0128 11:33:41.916011 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948795 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948906 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948930 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948977 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" Jan 28 11:33:42 crc kubenswrapper[4804]: I0128 11:33:42.582316 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:33:42 crc kubenswrapper[4804]: I0128 11:33:42.582675 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:33:48 crc kubenswrapper[4804]: I0128 11:33:48.914745 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.337489 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log" Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.338132 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.338180 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"22513089ed214da21f747da0505b2509c9785cf6745ef9c501eae0f5493cb868"} Jan 28 11:33:54 crc kubenswrapper[4804]: I0128 11:33:54.990736 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:55 crc kubenswrapper[4804]: I0128 11:33:55.914246 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:55 crc kubenswrapper[4804]: I0128 11:33:55.914652 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.095846 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"] Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.103776 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.385028 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerStarted","Data":"5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6"} Jan 28 11:33:57 crc kubenswrapper[4804]: I0128 11:33:57.392027 4804 generic.go:334] "Generic (PLEG): container finished" podID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerID="8bb0035f4e5fd8a32c41341d09445e829b7957527742ab3149892f6f0d0302e0" exitCode=0 Jan 28 11:33:57 crc kubenswrapper[4804]: I0128 11:33:57.392099 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerDied","Data":"8bb0035f4e5fd8a32c41341d09445e829b7957527742ab3149892f6f0d0302e0"} Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.663539 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785682 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785724 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785784 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.786060 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.786531 4804 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.793510 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx" (OuterVolumeSpecName: "kube-api-access-prwlx") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). 
InnerVolumeSpecName "kube-api-access-prwlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.806444 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.887436 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.887471 4804 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.404957 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerDied","Data":"5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6"} Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.405003 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6" Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.405012 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.529097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"] Jan 28 11:34:07 crc kubenswrapper[4804]: E0128 11:34:07.529929 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.529943 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.530065 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.530944 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.532790 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.539462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"] Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.589765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.590152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.590180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691873 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691985 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.692374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.692434 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.715105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.853966 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:08 crc kubenswrapper[4804]: I0128 11:34:08.272802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"] Jan 28 11:34:08 crc kubenswrapper[4804]: W0128 11:34:08.280068 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1622f571_d0d6_4247_b47e_4dda08dea3b3.slice/crio-41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106 WatchSource:0}: Error finding container 41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106: Status 404 returned error can't find the container with id 41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106 Jan 28 11:34:08 crc kubenswrapper[4804]: I0128 11:34:08.451145 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerStarted","Data":"41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106"} Jan 28 11:34:09 crc kubenswrapper[4804]: I0128 11:34:09.458288 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="e047fdc2cf23333cb90977f35d7c25de83d795b4da978c00a4770e83e68a278d" exitCode=0 Jan 28 11:34:09 crc kubenswrapper[4804]: I0128 11:34:09.458326 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"e047fdc2cf23333cb90977f35d7c25de83d795b4da978c00a4770e83e68a278d"} Jan 28 11:34:11 crc kubenswrapper[4804]: I0128 11:34:11.476687 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="b060a5c852789a2a12f3919e3783e22e4e12a30fe3f6d50bb9348c0d1cbbf2c3" exitCode=0 Jan 28 11:34:11 crc kubenswrapper[4804]: I0128 11:34:11.477113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"b060a5c852789a2a12f3919e3783e22e4e12a30fe3f6d50bb9348c0d1cbbf2c3"} Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.484869 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="9b1c121b4786bda80a07c19c69fdefec554a2bf786e53da947c28d643e02ab69" exitCode=0 Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.484986 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"9b1c121b4786bda80a07c19c69fdefec554a2bf786e53da947c28d643e02ab69"} Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.582655 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.582877 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.756665 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.876953 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.877665 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.877819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.878500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle" (OuterVolumeSpecName: "bundle") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.886048 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt" (OuterVolumeSpecName: "kube-api-access-42kbt") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "kube-api-access-42kbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.979271 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.979328 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.074369 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.188952 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util" (OuterVolumeSpecName: "util") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.283736 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.502580 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106"} Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505205 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505265 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.212589 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213035 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="pull" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213047 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="pull" Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213063 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="util" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213070 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="util" Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213083 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213088 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213203 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213533 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.215980 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.218366 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rtnpc" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.219423 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.223457 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.309389 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4kr\" (UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: \"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.410946 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4kr\" (UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: \"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.432863 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4kr\" 
(UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: \"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.530423 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.760516 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:17 crc kubenswrapper[4804]: I0128 11:34:17.521255 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" event={"ID":"d478ae3c-a9f5-4f6e-ae30-1bd80027de73","Type":"ContainerStarted","Data":"79d10a5e966e971eb72dbe65902340a083978afb3741f0bf00cfd4f0a6668320"} Jan 28 11:34:19 crc kubenswrapper[4804]: I0128 11:34:19.531545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" event={"ID":"d478ae3c-a9f5-4f6e-ae30-1bd80027de73","Type":"ContainerStarted","Data":"ecfbb35b9160d7adb86b3e4795f3b4b95e278e5bd877832ca2eaa5e102f0211c"} Jan 28 11:34:19 crc kubenswrapper[4804]: I0128 11:34:19.550191 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" podStartSLOduration=1.3630722259999999 podStartE2EDuration="3.550173086s" podCreationTimestamp="2026-01-28 11:34:16 +0000 UTC" firstStartedPulling="2026-01-28 11:34:16.774066391 +0000 UTC m=+732.568946375" lastFinishedPulling="2026-01-28 11:34:18.961167261 +0000 UTC m=+734.756047235" observedRunningTime="2026-01-28 11:34:19.546065081 +0000 UTC m=+735.340945065" watchObservedRunningTime="2026-01-28 11:34:19.550173086 +0000 UTC m=+735.345053070" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.193079 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.194601 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.197405 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rs8h2" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.205654 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.231744 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.232914 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.234823 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.236980 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-r6vm7"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254238 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254339 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254375 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.262007 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.262118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.313219 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.313870 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316248 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jbssz" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316480 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316524 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.322833 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355692 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355744 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355782 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355810 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355855 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356013 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356059 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356090 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356139 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.356246 4804 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.356290 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair podName:c17b2105-0264-4cf3-8204-e68ba577728e nodeName:}" failed. No retries permitted until 2026-01-28 11:34:27.856273704 +0000 UTC m=+743.651153688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-c5t8z" (UID: "c17b2105-0264-4cf3-8204-e68ba577728e") : secret "openshift-nmstate-webhook" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.374637 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.379323 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457075 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457158 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457155 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457180 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457224 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457549 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.457311 4804 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457604 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.457631 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert podName:77313f93-489e-4da6-81bb-eec0c795e242 nodeName:}" failed. No retries permitted until 2026-01-28 11:34:27.957612037 +0000 UTC m=+743.752492021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-bbn52" (UID: "77313f93-489e-4da6-81bb-eec0c795e242") : secret "plugin-serving-cert" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.458068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.476527 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.477951 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.511482 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.525343 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d7d54b946-gb592"]
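
The two MountVolume.SetUp failures above (secret "openshift-nmstate-webhook" for tls-key-pair, then secret "plugin-serving-cert") look like a startup race rather than a fault: the pods were admitted before the serving-cert secrets had been published, nestedpendingoperations schedules a retry after the logged 500ms backoff (growing on repeated failures), and both mounts succeed about half a second later, at 11:34:27.869996 and 11:34:27.972503. A minimal sketch of this wait-and-retry pattern using client-go; waitForSecret is a hypothetical helper, not the kubelet's actual code path:

    // Sketch: poll for a secret with a doubling backoff, mirroring the
    // "durationBeforeRetry 500ms" entries in the log above.
    package volumeretry

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForSecret(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	backoff := 500 * time.Millisecond // initial delay seen in the log
    	for {
    		_, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			return nil // secret exists; the volume SetUp can proceed
    		}
    		if !apierrors.IsNotFound(err) {
    			return err // a real API error, not just "not created yet"
    		}
    		fmt.Printf("secret %q not found, retrying in %v\n", name, backoff)
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(backoff):
    		}
    		backoff *= 2 // the kubelet caps this; the cap is elided here
    	}
    }

If the secret never appears, the kubelet keeps backing off and the pod stays in ContainerCreating; here the secrets arrive within a second, so a single retry suffices.
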
Need to start a new one" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.539030 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-gb592"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558882 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvng\" (UniqueName: \"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558986 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559067 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559096 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.588622 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: W0128 11:34:27.611136 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda741d157_784a_4e3e_9e35_200d91f3aa47.slice/crio-47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d WatchSource:0}: Error finding container 47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d: Status 404 returned error can't find the container with id 47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zvng\" (UniqueName: \"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660876 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660924 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660944 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.661974 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.662870 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.662977 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.663842 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.664418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.665009 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.667564 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.668126 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.680390 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zvng\" (UniqueName: \"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.725505 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: W0128 11:34:27.730353 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb63500d6_29e0_4eef_82cd_fdc0036ef0f2.slice/crio-fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07 WatchSource:0}: Error finding container fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07: Status 404 returned error can't find the container with id fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07 Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.866067 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" 
(UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.869622 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.869996 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.877637 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.967080 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.972503 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.151953 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-gb592"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.157035 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod411c17ba_96e6_4688_965a_16f19ebbdcec.slice/crio-5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c WatchSource:0}: Error finding container 5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c: Status 404 returned error can't find the container with id 5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.184266 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.185020 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17b2105_0264_4cf3_8204_e68ba577728e.slice/crio-6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234 WatchSource:0}: Error finding container 6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234: Status 404 returned error can't find the container with id 6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234 Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.233064 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.442455 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.453917 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77313f93_489e_4da6_81bb_eec0c795e242.slice/crio-bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb WatchSource:0}: Error finding container bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb: Status 404 returned error can't find the container with id bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.581744 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" event={"ID":"c17b2105-0264-4cf3-8204-e68ba577728e","Type":"ContainerStarted","Data":"6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.583913 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-gb592" event={"ID":"411c17ba-96e6-4688-965a-16f19ebbdcec","Type":"ContainerStarted","Data":"95e5f88d129c9c06bb64df2eddf147aa9fc8153f0d9b27cabd95df7057c34ab2"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.583937 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-gb592" event={"ID":"411c17ba-96e6-4688-965a-16f19ebbdcec","Type":"ContainerStarted","Data":"5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.585072 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.586277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6vm7" event={"ID":"a741d157-784a-4e3e-9e35-200d91f3aa47","Type":"ContainerStarted","Data":"47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.587703 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" event={"ID":"77313f93-489e-4da6-81bb-eec0c795e242","Type":"ContainerStarted","Data":"bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.606604 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d7d54b946-gb592" podStartSLOduration=1.606588575 podStartE2EDuration="1.606588575s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:34:28.603315188 +0000 UTC m=+744.398195172" watchObservedRunningTime="2026-01-28 11:34:28.606588575 +0000 UTC m=+744.401468559" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.612294 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" 
event={"ID":"c17b2105-0264-4cf3-8204-e68ba577728e","Type":"ContainerStarted","Data":"6d66be5e6e8c32470ffd605087adb97daf341cc67035fbb7d6b8127daf872569"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.612912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.615066 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6vm7" event={"ID":"a741d157-784a-4e3e-9e35-200d91f3aa47","Type":"ContainerStarted","Data":"5f96900f275a52cd5a01b9936039699dacb96bff3d36eb1ad44430e8232ed64c"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.615257 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.616686 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"5b18fe7fcd742562ae9b731d6534622502ea8a4cec123fb757da8420c3864356"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.618055 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" event={"ID":"77313f93-489e-4da6-81bb-eec0c795e242","Type":"ContainerStarted","Data":"7665f99a4d0917bbdb5dfb7c4fc2e0de9eba89fd20a8f0355991f8766f1d9ffa"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.633106 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" podStartSLOduration=1.714348889 podStartE2EDuration="4.633085641s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:28.186944144 +0000 UTC m=+743.981824128" lastFinishedPulling="2026-01-28 11:34:31.105680896 +0000 UTC m=+746.900560880" observedRunningTime="2026-01-28 11:34:31.628774719 +0000 UTC m=+747.423654703" watchObservedRunningTime="2026-01-28 11:34:31.633085641 +0000 UTC m=+747.427965625" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.674651 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-r6vm7" podStartSLOduration=1.152941929 podStartE2EDuration="4.674632833s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:27.612968522 +0000 UTC m=+743.407848506" lastFinishedPulling="2026-01-28 11:34:31.134659426 +0000 UTC m=+746.929539410" observedRunningTime="2026-01-28 11:34:31.671349445 +0000 UTC m=+747.466229429" watchObservedRunningTime="2026-01-28 11:34:31.674632833 +0000 UTC m=+747.469512817" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.676682 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" podStartSLOduration=2.029802883 podStartE2EDuration="4.67667506s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:28.456490463 +0000 UTC m=+744.251370447" lastFinishedPulling="2026-01-28 11:34:31.10336265 +0000 UTC m=+746.898242624" observedRunningTime="2026-01-28 11:34:31.653257172 +0000 UTC m=+747.448137146" watchObservedRunningTime="2026-01-28 11:34:31.67667506 +0000 UTC m=+747.471555044" Jan 28 11:34:33 crc kubenswrapper[4804]: I0128 11:34:33.632009 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"a77c121d900e7c4e29c8557c6d5b43c4d7c972f50bf1ea53116010408022eb64"} Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.616848 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.648710 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" podStartSLOduration=5.175828389 podStartE2EDuration="10.648673506s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:27.732885784 +0000 UTC m=+743.527765768" lastFinishedPulling="2026-01-28 11:34:33.205730911 +0000 UTC m=+749.000610885" observedRunningTime="2026-01-28 11:34:33.648684206 +0000 UTC m=+749.443564200" watchObservedRunningTime="2026-01-28 11:34:37.648673506 +0000 UTC m=+753.443553540" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.870771 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.870877 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.875418 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.671617 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.745155 4804 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.759397 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.582169 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.582518 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.582587 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.583077 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:34:42 crc 
kubenswrapper[4804]: I0128 11:34:42.583133 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" gracePeriod=600 Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.699905 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" exitCode=0 Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.699940 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.700472 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.700493 4804 scope.go:117] "RemoveContainer" containerID="c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" Jan 28 11:34:47 crc kubenswrapper[4804]: I0128 11:34:47.889196 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.789423 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"]
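
The machine-config-daemon restart just above is the standard liveness path end to end: the prober's GET to http://127.0.0.1:8798/health is refused, the kubelet marks the container unhealthy, kills it with the pod's 600s grace period, and the next PLEG events report the old container dying (exitCode=0 after SIGTERM) and its replacement starting; RemoveContainer then garbage-collects an earlier dead instance. A minimal sketch of an equivalent HTTP liveness check (kubelet HTTP probes count roughly 2xx/3xx as success); the URL is taken from the log, the rest is illustrative:

    // Sketch: an HTTP liveness check like the one that failed above.
    // A refused connection (no listener on 8798) returns an error, which
    // a prober would count toward the container's failureThreshold.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused", as logged
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println("Liveness probe failed:", err)
    	}
    }
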
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.795406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.809344 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"] Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977692 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977743 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977791 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079764 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079889 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.080408 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.080455 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.107138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.111540 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.317208 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"] Jan 28 11:35:01 crc kubenswrapper[4804]: W0128 11:35:01.326474 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8da098_aace_4ed5_8846_6fff6aee19be.slice/crio-de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b WatchSource:0}: Error finding container de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b: Status 404 returned error can't find the container with id de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.825666 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="16087a95badf2d73ede6e46431321e6810ab78fafe1112764ae45ce7d1f66d24" exitCode=0 Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.825765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"16087a95badf2d73ede6e46431321e6810ab78fafe1112764ae45ce7d1f66d24"} Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.826148 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerStarted","Data":"de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.120214 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.121778 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.134058 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209604 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209756 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209782 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310515 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310606 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.311128 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.311167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.328711 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.443118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.652427 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: W0128 11:35:03.665524 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d15df8_f5ee_4982_87f1_af5e3ec371ba.slice/crio-8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8 WatchSource:0}: Error finding container 8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8: Status 404 returned error can't find the container with id 8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.810393 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xghdb" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" containerID="cri-o://be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" gracePeriod=15 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.838824 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f" exitCode=0 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.838896 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.839000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.841873 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="06684d1398fab6fb63e2ee12ea3e1967dbea452d5ada621bb30b4e1ff8f87295" exitCode=0 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.841925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"06684d1398fab6fb63e2ee12ea3e1967dbea452d5ada621bb30b4e1ff8f87295"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.116736 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xghdb_bf13c867-7c3e-4845-a6c8-f25700c31666/console/0.log" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.116805 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221010 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221350 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221393 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221484 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.222816 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config" (OuterVolumeSpecName: "console-config") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223230 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223360 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.227541 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h" (OuterVolumeSpecName: "kube-api-access-dtt9h") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "kube-api-access-dtt9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.228389 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.228912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327944 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327977 4804 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327995 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328006 4804 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328019 4804 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328030 4804 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328040 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.867941 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="a3c8691c51f616b7604b0930041e17ae2ec23f7cf62d3089ecd56dc16dca0b5b" exitCode=0 Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.868022 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"a3c8691c51f616b7604b0930041e17ae2ec23f7cf62d3089ecd56dc16dca0b5b"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871686 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xghdb_bf13c867-7c3e-4845-a6c8-f25700c31666/console/0.log" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871729 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" exitCode=2 Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerDied","Data":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" 
event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerDied","Data":"39c6be7d2c6b604e29ab674e70547e5294e550d001aed4bfc7286a6d8fd167c8"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871813 4804 scope.go:117] "RemoveContainer" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.872018 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.877687 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.905823 4804 scope.go:117] "RemoveContainer" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: E0128 11:35:04.907222 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": container with ID starting with be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f not found: ID does not exist" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.907603 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} err="failed to get container status \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": rpc error: code = NotFound desc = could not find container \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": container with ID starting with be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f not found: ID does not exist" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.964368 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.969323 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:35:05 crc kubenswrapper[4804]: I0128 11:35:05.889740 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731" exitCode=0 Jan 28 11:35:05 crc kubenswrapper[4804]: I0128 11:35:05.889785 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731"} Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.144558 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251305 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251419 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.253136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle" (OuterVolumeSpecName: "bundle") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.259140 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc" (OuterVolumeSpecName: "kube-api-access-w8nsc") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "kube-api-access-w8nsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.273108 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util" (OuterVolumeSpecName: "util") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352780 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352833 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352848 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b"} Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897877 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897920 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.923862 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" path="/var/lib/kubelet/pods/bf13c867-7c3e-4845-a6c8-f25700c31666/volumes" Jan 28 11:35:07 crc kubenswrapper[4804]: I0128 11:35:07.905466 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490"} Jan 28 11:35:07 crc kubenswrapper[4804]: I0128 11:35:07.929528 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjhw2" podStartSLOduration=1.761607767 podStartE2EDuration="4.92951005s" podCreationTimestamp="2026-01-28 11:35:03 +0000 UTC" firstStartedPulling="2026-01-28 11:35:03.840029856 +0000 UTC m=+779.634909840" lastFinishedPulling="2026-01-28 11:35:07.007932139 +0000 UTC m=+782.802812123" observedRunningTime="2026-01-28 11:35:07.921440225 +0000 UTC m=+783.716320239" watchObservedRunningTime="2026-01-28 11:35:07.92951005 +0000 UTC m=+783.724390044" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.444224 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.444330 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.500352 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.974582 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:14 crc kubenswrapper[4804]: E0128 11:35:14.555459 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:14 crc kubenswrapper[4804]: I0128 11:35:14.911914 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:15 crc kubenswrapper[4804]: I0128 11:35:15.944468 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjhw2" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" containerID="cri-o://a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" gracePeriod=2 Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.086925 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087159 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="pull" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087171 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="pull" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087184 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087191 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087205 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087211 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087222 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="util" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087228 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="util" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087317 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087326 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087877 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.090851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091164 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091305 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-csf86" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091432 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.105800 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.161068 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5hl\" (UniqueName: \"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291993 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.337359 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.338312 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345354 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345565 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4pbr9" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345636 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.349807 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp5hl\" (UniqueName: \"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393608 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393676 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393723 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393764 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc 
kubenswrapper[4804]: I0128 11:35:16.399770 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.399757 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.410627 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp5hl\" (UniqueName: \"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.494680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.495140 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.495189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.497633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.498364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.512477 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.675120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.705998 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.965461 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" exitCode=0 Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.965507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.102574 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.168565 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:17 crc kubenswrapper[4804]: W0128 11:35:17.178704 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13606290_8fc4_4792_a328_207ee9a1994e.slice/crio-38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63 WatchSource:0}: Error finding container 38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63: Status 404 returned error can't find the container with id 38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63 Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206765 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206876 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.208153 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities" (OuterVolumeSpecName: 
"utilities") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.211316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt" (OuterVolumeSpecName: "kube-api-access-p27zt") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "kube-api-access-p27zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.295849 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:17 crc kubenswrapper[4804]: W0128 11:35:17.300514 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0eda12d_b723_4a3a_8f2b_916de07b279c.slice/crio-7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8 WatchSource:0}: Error finding container 7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8: Status 404 returned error can't find the container with id 7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8 Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.307859 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.307893 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.330430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.409240 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.972861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.972929 4804 scope.go:117] "RemoveContainer" containerID="a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.973041 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.985928 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" event={"ID":"a0eda12d-b723-4a3a-8f2b-916de07b279c","Type":"ContainerStarted","Data":"7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.989553 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" event={"ID":"13606290-8fc4-4792-a328-207ee9a1994e","Type":"ContainerStarted","Data":"38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.998228 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.004672 4804 scope.go:117] "RemoveContainer" containerID="b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731" Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.007040 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.024036 4804 scope.go:117] "RemoveContainer" containerID="ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f" Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.924089 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" path="/var/lib/kubelet/pods/58d15df8-f5ee-4982-87f1-af5e3ec371ba/volumes" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.020718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" event={"ID":"13606290-8fc4-4792-a328-207ee9a1994e","Type":"ContainerStarted","Data":"9972640d9b749e4a4f568799e85cef0cb711c4450d8a295c95f987cc8e1e6c6f"} Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.021155 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.022418 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" event={"ID":"a0eda12d-b723-4a3a-8f2b-916de07b279c","Type":"ContainerStarted","Data":"13c9557b28ef9973ccd811e27b5368c68d7053d774b9caec17246f58f322b60b"} Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.022564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.038748 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" podStartSLOduration=2.132870489 podStartE2EDuration="7.038730643s" podCreationTimestamp="2026-01-28 11:35:16 +0000 UTC" firstStartedPulling="2026-01-28 11:35:17.18083106 +0000 UTC m=+792.975711044" lastFinishedPulling="2026-01-28 11:35:22.086691214 +0000 UTC m=+797.881571198" observedRunningTime="2026-01-28 11:35:23.037636327 +0000 UTC m=+798.832516311" watchObservedRunningTime="2026-01-28 11:35:23.038730643 +0000 UTC m=+798.833610627" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.055797 4804 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" podStartSLOduration=2.287949905 podStartE2EDuration="7.055775682s" podCreationTimestamp="2026-01-28 11:35:16 +0000 UTC" firstStartedPulling="2026-01-28 11:35:17.303478062 +0000 UTC m=+793.098358036" lastFinishedPulling="2026-01-28 11:35:22.071303829 +0000 UTC m=+797.866183813" observedRunningTime="2026-01-28 11:35:23.054396597 +0000 UTC m=+798.849276581" watchObservedRunningTime="2026-01-28 11:35:23.055775682 +0000 UTC m=+798.850655666" Jan 28 11:35:24 crc kubenswrapper[4804]: E0128 11:35:24.679449 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:34 crc kubenswrapper[4804]: E0128 11:35:34.831850 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:36 crc kubenswrapper[4804]: I0128 11:35:36.679984 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:44 crc kubenswrapper[4804]: E0128 11:35:44.961477 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:55 crc kubenswrapper[4804]: E0128 11:35:55.085923 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:56 crc kubenswrapper[4804]: I0128 11:35:56.709175 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426600 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426613 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426629 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-content" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426635 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-content" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426652 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-utilities" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426658 
4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-utilities" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426760 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.427157 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.429532 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.429599 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cwkld" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.440431 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5kdlz"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.443350 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.444243 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.445632 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.445819 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.513212 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kcvj8"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.514304 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.515909 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.518609 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.518612 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.519301 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7kw6p" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.521869 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.523035 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.526330 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.545005 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593436 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593458 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593544 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593598 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: 
\"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593649 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593687 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593739 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593781 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593802 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695489 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " 
pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695559 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695610 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695639 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695670 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695699 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695745 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.695758 4804 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " 
pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695797 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.695831 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist podName:2fa1df7e-03c8-4931-ad89-222acae36030 nodeName:}" failed. No retries permitted until 2026-01-28 11:35:58.195810413 +0000 UTC m=+833.990690397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist") pod "speaker-kcvj8" (UID: "2fa1df7e-03c8-4931-ad89-222acae36030") : secret "metallb-memberlist" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695855 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695912 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695939 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695969 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695985 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696280 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696503 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 
11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.696651 4804 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.697036 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs podName:1ae74e9e-799f-46bb-9a53-c8307c83203d nodeName:}" failed. No retries permitted until 2026-01-28 11:35:58.197009663 +0000 UTC m=+833.991889657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs") pod "controller-6968d8fdc4-rfhfx" (UID: "1ae74e9e-799f-46bb-9a53-c8307c83203d") : secret "controller-certs-secret" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696756 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.697303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.700678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.700958 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.702380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.703326 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.706406 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.712715 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.728508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.731526 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.750745 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.751520 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.760586 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.043922 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.201251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.201645 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: E0128 11:35:58.202076 4804 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 11:35:58 crc kubenswrapper[4804]: E0128 11:35:58.202165 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist podName:2fa1df7e-03c8-4931-ad89-222acae36030 nodeName:}" failed. No retries permitted until 2026-01-28 11:35:59.202139517 +0000 UTC m=+834.997019531 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist") pod "speaker-kcvj8" (UID: "2fa1df7e-03c8-4931-ad89-222acae36030") : secret "metallb-memberlist" not found Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.208605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.244511 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"2d4b44323b47cdde41dc703804ae2564d57c7bcf91fd46ed7006b163b578f7cb"} Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.422999 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:58 crc kubenswrapper[4804]: W0128 11:35:58.434978 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce00c89_f00d_43aa_9907_77bf331c3dbd.slice/crio-add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1 WatchSource:0}: Error finding container add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1: Status 404 returned error can't find the container with id add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1 Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.449213 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.628423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:58 crc kubenswrapper[4804]: W0128 11:35:58.631762 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae74e9e_799f_46bb_9a53_c8307c83203d.slice/crio-e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29 WatchSource:0}: Error finding container e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29: Status 404 returned error can't find the container with id e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29 Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.216729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.224547 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.251199 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" 
event={"ID":"3ce00c89-f00d-43aa-9907-77bf331c3dbd","Type":"ContainerStarted","Data":"add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253157 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"6a98a7ce8ac14193a02df20d7d1bd2e536ce1f2aa1631767b910db1de13874b6"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"c4163e95b8b9b2f31e1575121524d0fa4e97b835e27075aa84519e2e1ebd1e06"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.254138 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.330118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: W0128 11:35:59.352151 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa1df7e_03c8_4931_ad89_222acae36030.slice/crio-20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9 WatchSource:0}: Error finding container 20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9: Status 404 returned error can't find the container with id 20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9 Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262218 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"921e284e3d9b659bbf8816201bf640abe8dcf1ed3a64b6a82af30835099c71c8"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262302 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"280ceb91f3066f5cecc81e9a83bc4961a7ff321707905f8b5bbc6d5d048e400d"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262455 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kcvj8" Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.282485 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-rfhfx" podStartSLOduration=3.282463555 podStartE2EDuration="3.282463555s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:35:59.276236839 +0000 UTC 
m=+835.071116823" watchObservedRunningTime="2026-01-28 11:36:00.282463555 +0000 UTC m=+836.077343539" Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.283833 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kcvj8" podStartSLOduration=3.283826831 podStartE2EDuration="3.283826831s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:36:00.280709138 +0000 UTC m=+836.075589122" watchObservedRunningTime="2026-01-28 11:36:00.283826831 +0000 UTC m=+836.078706815" Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.309445 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="ede577a501563dd65541c3bce23272518eb5a3074520a28edc01707c0be6abde" exitCode=0 Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.309492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"ede577a501563dd65541c3bce23272518eb5a3074520a28edc01707c0be6abde"} Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.312986 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" event={"ID":"3ce00c89-f00d-43aa-9907-77bf331c3dbd","Type":"ContainerStarted","Data":"1e62a4e66b969a2eeb364d6821e827820336e596ec172f2870ec6fd5370de40d"} Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.313354 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.352241 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" podStartSLOduration=2.003251181 podStartE2EDuration="8.352215678s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="2026-01-28 11:35:58.437253217 +0000 UTC m=+834.232133201" lastFinishedPulling="2026-01-28 11:36:04.786217724 +0000 UTC m=+840.581097698" observedRunningTime="2026-01-28 11:36:05.351689122 +0000 UTC m=+841.146569106" watchObservedRunningTime="2026-01-28 11:36:05.352215678 +0000 UTC m=+841.147095672" Jan 28 11:36:06 crc kubenswrapper[4804]: I0128 11:36:06.321394 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="6b15248246c605c8c463dd7f6c1e1b35ed2c356a3679882be60c7d698ddebd5d" exitCode=0 Jan 28 11:36:06 crc kubenswrapper[4804]: I0128 11:36:06.322097 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"6b15248246c605c8c463dd7f6c1e1b35ed2c356a3679882be60c7d698ddebd5d"} Jan 28 11:36:07 crc kubenswrapper[4804]: I0128 11:36:07.336760 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="c5c853034a4df40a34754ab55b0d3ab5bc52a7abb8ba8d9a849712955716ea6d" exitCode=0 Jan 28 11:36:07 crc kubenswrapper[4804]: I0128 11:36:07.336840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"c5c853034a4df40a34754ab55b0d3ab5bc52a7abb8ba8d9a849712955716ea6d"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 
11:36:08.346108 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"58337462fb4eeaca49ad37e0df7080d673d6d074b8175c194972c8a1ff44fd59"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.346401 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"f543bf86db90a3d2ce30beb92894630f79a6096e895364cc36da1cb382842757"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.346411 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"6011dc90411ae479974801023fdc600ffd10bab42a58bac5f5d5dc6c83e12955"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.453876 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.338635 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kcvj8" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356264 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"650e496c663f1426d1d86565d331dab6640049b8952d0c419bc1d9d5110c5396"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"7055e999a484a7cec751a69f9b7db66f5e57dc33d929f7107814450d261a314e"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356343 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"344cea65acad0cb986c6dc932bea04a96128153e6cb96fa687eaecd709be2622"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356631 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.389020 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5kdlz" podStartSLOduration=5.524881829 podStartE2EDuration="12.388988108s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="2026-01-28 11:35:57.915897911 +0000 UTC m=+833.710777895" lastFinishedPulling="2026-01-28 11:36:04.78000419 +0000 UTC m=+840.574884174" observedRunningTime="2026-01-28 11:36:09.387255112 +0000 UTC m=+845.182135106" watchObservedRunningTime="2026-01-28 11:36:09.388988108 +0000 UTC m=+845.183868162" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.062002 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"] Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.063113 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.070886 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.085560 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"] Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.202730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.202812 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.202925 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304691 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304810 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.305368 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.305389 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.341712 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.389444 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.611929 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"] Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.383642 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="7756bceb2367830456ddd3c06e76b6e7a3fb386504ae8ce9d485c354f3ef9ad4" exitCode=0 Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.383716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"7756bceb2367830456ddd3c06e76b6e7a3fb386504ae8ce9d485c354f3ef9ad4"} Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.384120 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerStarted","Data":"c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4"} Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.762085 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.804403 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:36:17 crc kubenswrapper[4804]: I0128 11:36:17.764425 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:36:18 crc kubenswrapper[4804]: I0128 11:36:18.049680 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:36:19 crc kubenswrapper[4804]: I0128 11:36:19.448141 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" 
event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerStarted","Data":"2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c"} Jan 28 11:36:20 crc kubenswrapper[4804]: I0128 11:36:20.456317 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c" exitCode=0 Jan 28 11:36:20 crc kubenswrapper[4804]: I0128 11:36:20.456447 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c"} Jan 28 11:36:21 crc kubenswrapper[4804]: I0128 11:36:21.468076 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="eec4f650f94b2fd54830ba513210773442ce9f43933229f2a03fb53026c41e83" exitCode=0 Jan 28 11:36:21 crc kubenswrapper[4804]: I0128 11:36:21.468143 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"eec4f650f94b2fd54830ba513210773442ce9f43933229f2a03fb53026c41e83"} Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.771262 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893092 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.894095 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle" (OuterVolumeSpecName: "bundle") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.904357 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util" (OuterVolumeSpecName: "util") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.906520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz" (OuterVolumeSpecName: "kube-api-access-j7psz") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "kube-api-access-j7psz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995148 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995309 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") on node \"crc\" DevicePath \"\"" Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995382 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483190 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4"} Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483227 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4" Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483250 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.378477 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"] Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379503 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="util" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379519 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="util" Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379544 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="pull" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379552 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="pull" Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379563 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379571 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379681 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.380295 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.385866 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.386044 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-98bvm" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.386241 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.393088 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"] Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.479112 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.479201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.580432 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.580547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.581086 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.607417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.698812 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" Jan 28 11:36:30 crc kubenswrapper[4804]: I0128 11:36:30.158859 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"] Jan 28 11:36:30 crc kubenswrapper[4804]: I0128 11:36:30.524579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" event={"ID":"af27b36c-f1e1-492e-9b04-3ad941908789","Type":"ContainerStarted","Data":"8f425d6d502752a9d6d0692a0818820c3fc46cdcd8fc11d34568aaa510ca26ad"} Jan 28 11:36:52 crc kubenswrapper[4804]: I0128 11:36:52.676005 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" event={"ID":"af27b36c-f1e1-492e-9b04-3ad941908789","Type":"ContainerStarted","Data":"92c1db0255f69e1776c4401b6f4bef74a564dd481eae0872024fac22b4bbac3e"} Jan 28 11:36:52 crc kubenswrapper[4804]: I0128 11:36:52.709249 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" podStartSLOduration=2.095180022 podStartE2EDuration="23.709230109s" podCreationTimestamp="2026-01-28 11:36:29 +0000 UTC" firstStartedPulling="2026-01-28 11:36:30.170301025 +0000 UTC m=+865.965181009" lastFinishedPulling="2026-01-28 11:36:51.784351102 +0000 UTC m=+887.579231096" observedRunningTime="2026-01-28 11:36:52.704440554 +0000 UTC m=+888.499320568" watchObservedRunningTime="2026-01-28 11:36:52.709230109 +0000 UTC m=+888.504110103" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.532010 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"] Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.533290 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.535452 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z6tks" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.536013 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.542533 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"] Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.543235 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.606499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.606607 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.707935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.708001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.731544 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.752430 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.854916 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:36:55 crc kubenswrapper[4804]: I0128 11:36:55.385552 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"] Jan 28 11:36:55 crc kubenswrapper[4804]: I0128 11:36:55.692288 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" event={"ID":"dd7c8a18-36d1-45d5-aaf5-daff9b218438","Type":"ContainerStarted","Data":"881073ac4e29852f574bf1e62185b39972539c646588b9ba86aa84c0868d0382"} Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.857816 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"] Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.859172 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.861088 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wfvth" Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.879267 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"] Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.947466 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.947686 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.048869 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.048980 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.079207 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.081708 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.180620 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.631704 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"] Jan 28 11:36:57 crc kubenswrapper[4804]: W0128 11:36:57.640236 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a0c933_7194_403d_8345_446cc9941fa5.slice/crio-020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a WatchSource:0}: Error finding container 020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a: Status 404 returned error can't find the container with id 020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.709549 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" event={"ID":"47a0c933-7194-403d-8345-446cc9941fa5","Type":"ContainerStarted","Data":"020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a"} Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.618268 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.619937 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.631478 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706486 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706571 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706626 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807752 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807833 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.808253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.808322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.826587 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.949032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:03 crc kubenswrapper[4804]: I0128 11:37:03.456455 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.751907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" event={"ID":"dd7c8a18-36d1-45d5-aaf5-daff9b218438","Type":"ContainerStarted","Data":"a0f93ecfab39a23449a9fa3d174d5b6f39095d0d316bca435ba12fb8588de85e"} Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.752234 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.754487 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" event={"ID":"47a0c933-7194-403d-8345-446cc9941fa5","Type":"ContainerStarted","Data":"d52e56df8738f57e09c7669efa58b00e365fe1cc5fca928e26b7fb002d00e1fc"} Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757049 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" exitCode=0 Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757129 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c"} Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerStarted","Data":"1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb"} Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.771282 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" podStartSLOduration=2.17287153 podStartE2EDuration="10.771263041s" podCreationTimestamp="2026-01-28 11:36:54 +0000 UTC" firstStartedPulling="2026-01-28 11:36:55.39972256 +0000 UTC m=+891.194602544" lastFinishedPulling="2026-01-28 11:37:03.998114071 +0000 UTC m=+899.792994055" observedRunningTime="2026-01-28 11:37:04.768696319 +0000 UTC m=+900.563576303" watchObservedRunningTime="2026-01-28 11:37:04.771263041 +0000 UTC m=+900.566143025" Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.808248 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" podStartSLOduration=2.420131205 podStartE2EDuration="8.808222952s" podCreationTimestamp="2026-01-28 11:36:56 +0000 UTC" firstStartedPulling="2026-01-28 11:36:57.645497087 +0000 UTC m=+893.440377071" lastFinishedPulling="2026-01-28 11:37:04.033588834 +0000 UTC m=+899.828468818" observedRunningTime="2026-01-28 11:37:04.800607927 +0000 
UTC m=+900.595487911" watchObservedRunningTime="2026-01-28 11:37:04.808222952 +0000 UTC m=+900.603102976" Jan 28 11:37:06 crc kubenswrapper[4804]: I0128 11:37:06.772060 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" exitCode=0 Jan 28 11:37:06 crc kubenswrapper[4804]: I0128 11:37:06.772135 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88"} Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.502438 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"] Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.503449 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.508068 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-w2npd" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.510246 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"] Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.598426 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.598527 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.699464 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.699588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.720692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.720939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.783647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerStarted","Data":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"} Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.807979 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6lmzp" podStartSLOduration=4.188676745 podStartE2EDuration="6.807959942s" podCreationTimestamp="2026-01-28 11:37:01 +0000 UTC" firstStartedPulling="2026-01-28 11:37:04.758186461 +0000 UTC m=+900.553066445" lastFinishedPulling="2026-01-28 11:37:07.377469648 +0000 UTC m=+903.172349642" observedRunningTime="2026-01-28 11:37:07.803099896 +0000 UTC m=+903.597979880" watchObservedRunningTime="2026-01-28 11:37:07.807959942 +0000 UTC m=+903.602839926" Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.823264 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hkwds" Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.104722 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"] Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.791113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hkwds" event={"ID":"4da2c74c-883d-4690-bb94-a34b198ccf89","Type":"ContainerStarted","Data":"192d106a15fd447a9319e49bd02a3c7368a5b41e2631d8be6d296ba0ec46acc3"} Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.791465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hkwds" event={"ID":"4da2c74c-883d-4690-bb94-a34b198ccf89","Type":"ContainerStarted","Data":"d389fd4a1afc18455e821132fefac4ed2ebc9ca4ed1818da43afbdce6d58ae65"} Jan 28 11:37:09 crc kubenswrapper[4804]: I0128 11:37:09.858747 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" Jan 28 11:37:09 crc kubenswrapper[4804]: I0128 11:37:09.878402 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-hkwds" podStartSLOduration=2.878369222 podStartE2EDuration="2.878369222s" podCreationTimestamp="2026-01-28 11:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:37:08.808051921 +0000 UTC m=+904.602931915" watchObservedRunningTime="2026-01-28 11:37:09.878369222 +0000 UTC m=+905.673249206" Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.949551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.949600 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.992686 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 
11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.582388 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.582457 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.850119 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.893355 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:14 crc kubenswrapper[4804]: I0128 11:37:14.825738 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6lmzp" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" containerID="cri-o://cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" gracePeriod=2 Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.793429 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841092 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" exitCode=0 Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841155 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"} Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb"} Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841221 4804 scope.go:117] "RemoveContainer" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841231 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.868553 4804 scope.go:117] "RemoveContainer" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.894377 4804 scope.go:117] "RemoveContainer" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908230 4804 scope.go:117] "RemoveContainer" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.908731 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": container with ID starting with cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd not found: ID does not exist" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908777 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"} err="failed to get container status \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": rpc error: code = NotFound desc = could not find container \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": container with ID starting with cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908804 4804 scope.go:117] "RemoveContainer" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.909110 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": container with ID starting with 567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88 not found: ID does not exist" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909148 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88"} err="failed to get container status \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": rpc error: code = NotFound desc = could not find container \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": container with ID starting with 567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88 not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909174 4804 scope.go:117] "RemoveContainer" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.909577 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": container with ID starting with e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c not found: ID does not exist" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" 
Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909621 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c"} err="failed to get container status \"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": rpc error: code = NotFound desc = could not find container \"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": container with ID starting with e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911077 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911177 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.912467 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities" (OuterVolumeSpecName: "utilities") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.916938 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf" (OuterVolumeSpecName: "kube-api-access-mxhzf") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "kube-api-access-mxhzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.939927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012384 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012423 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012440 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031377 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.031741 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-utilities" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031764 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-utilities" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.031778 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031790 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.031823 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-content" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031835 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-content" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.032043 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.032855 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.034605 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tf9rf" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.037055 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.037154 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.040806 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.171619 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.175905 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.213766 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.236110 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87376851_1792_4b24_bc20_c87628a93a38.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87376851_1792_4b24_bc20_c87628a93a38.slice/crio-1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb\": RecentStats: unable to find data in memory cache]" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.315697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.335610 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.353255 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.530272 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.850739 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerStarted","Data":"09f757a13b0e21cb48db3d2e18b2c22548dbb1ce8278a04562fd84b33b66a1a7"} Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.925692 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87376851-1792-4b24-bc20-c87628a93a38" path="/var/lib/kubelet/pods/87376851-1792-4b24-bc20-c87628a93a38/volumes" Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.229794 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.837015 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.838239 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.846525 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.911308 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.012784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.031278 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.170296 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.391579 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.902908 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmjpc" event={"ID":"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec","Type":"ContainerStarted","Data":"e91bc7fa7fe8a5ca3b7b1c3727492ed51bf9fa2a650d08ef0767845204bbb9ad"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.962875 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmjpc" event={"ID":"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec","Type":"ContainerStarted","Data":"0908221b06e72200159b250f3b375731ea9d3f075be6e2dacff8316f87d4000a"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.965112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerStarted","Data":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.965324 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nslxx" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" containerID="cri-o://ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" gracePeriod=2 Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.000047 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nslxx" podStartSLOduration=1.059743727 podStartE2EDuration="17.000026223s" podCreationTimestamp="2026-01-28 11:37:16 +0000 UTC" firstStartedPulling="2026-01-28 11:37:16.538866182 +0000 UTC m=+912.333746176" lastFinishedPulling="2026-01-28 11:37:32.479148688 +0000 UTC m=+928.274028672" observedRunningTime="2026-01-28 11:37:32.996447797 +0000 UTC m=+928.791327781" watchObservedRunningTime="2026-01-28 11:37:33.000026223 +0000 UTC m=+928.794906227" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.005233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cmjpc" podStartSLOduration=1.9458477159999998 podStartE2EDuration="12.00520966s" podCreationTimestamp="2026-01-28 11:37:21 +0000 UTC" firstStartedPulling="2026-01-28 11:37:22.404028726 +0000 UTC m=+918.198908710" lastFinishedPulling="2026-01-28 11:37:32.46339067 +0000 UTC m=+928.258270654" observedRunningTime="2026-01-28 11:37:32.983551752 +0000 UTC m=+928.778431736" watchObservedRunningTime="2026-01-28 11:37:33.00520966 +0000 UTC m=+928.800089644" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.402895 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.572184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"905df814-fa43-4ef1-b5e6-cfa26ec65547\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.578150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6" (OuterVolumeSpecName: "kube-api-access-tflf6") pod "905df814-fa43-4ef1-b5e6-cfa26ec65547" (UID: "905df814-fa43-4ef1-b5e6-cfa26ec65547"). InnerVolumeSpecName "kube-api-access-tflf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.673866 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973321 4804 generic.go:334] "Generic (PLEG): container finished" podID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" exitCode=0 Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973387 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerDied","Data":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerDied","Data":"09f757a13b0e21cb48db3d2e18b2c22548dbb1ce8278a04562fd84b33b66a1a7"} Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973462 4804 scope.go:117] "RemoveContainer" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.039497 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.042740 4804 scope.go:117] "RemoveContainer" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc kubenswrapper[4804]: E0128 11:37:34.043381 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": container with ID starting with ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5 not found: ID does not exist" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.043420 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} err="failed to get 
container status \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": rpc error: code = NotFound desc = could not find container \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": container with ID starting with ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5 not found: ID does not exist" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.044832 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.922318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" path="/var/lib/kubelet/pods/905df814-fa43-4ef1-b5e6-cfa26ec65547/volumes" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.116556 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: E0128 11:37:36.116848 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.116863 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.117021 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.118060 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.132166 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307568 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307629 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408392 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " 
pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408666 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408690 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.409178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.409867 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.443825 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.451681 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.897774 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: W0128 11:37:36.900126 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod743d1389_d1bf_4a3d_9dd2_c5e5cbb2373b.slice/crio-47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff WatchSource:0}: Error finding container 47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff: Status 404 returned error can't find the container with id 47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.992398 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerStarted","Data":"47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff"} Jan 28 11:37:38 crc kubenswrapper[4804]: I0128 11:37:38.001472 4804 generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" exitCode=0 Jan 28 11:37:38 crc kubenswrapper[4804]: I0128 11:37:38.001601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41"} Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.018387 4804 generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" exitCode=0 Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.018486 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646"} Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.903950 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.905778 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.936713 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981472 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.027337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerStarted","Data":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.082975 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083053 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083570 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.102687 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.239910 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.804402 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gpglg" podStartSLOduration=3.076376749 podStartE2EDuration="5.804360268s" podCreationTimestamp="2026-01-28 11:37:36 +0000 UTC" firstStartedPulling="2026-01-28 11:37:38.003981342 +0000 UTC m=+933.798861336" lastFinishedPulling="2026-01-28 11:37:40.731964871 +0000 UTC m=+936.526844855" observedRunningTime="2026-01-28 11:37:41.048470304 +0000 UTC m=+936.843350288" watchObservedRunningTime="2026-01-28 11:37:41.804360268 +0000 UTC m=+937.599240252" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.806040 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034280 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d" exitCode=0 Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d"} Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034410 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerStarted","Data":"e7c74f09e0faa8b24cdcfbf5befa9f8e319aac43fb5b70dd37852eec57f84da2"} Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.171370 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.171429 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.212432 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.582495 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.582608 4804 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:37:43 crc kubenswrapper[4804]: I0128 11:37:43.067164 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:44 crc kubenswrapper[4804]: I0128 11:37:44.060043 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213" exitCode=0 Jan 28 11:37:44 crc kubenswrapper[4804]: I0128 11:37:44.060495 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213"} Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.931711 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.933197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.935089 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bjqwr" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.954423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966297 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966427 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966454 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067824 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: 
\"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067875 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067989 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.068531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.068567 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.091481 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerStarted","Data":"d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677"} Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.093345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.113753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vx6td" podStartSLOduration=3.251817885 podStartE2EDuration="6.113730127s" podCreationTimestamp="2026-01-28 11:37:40 +0000 UTC" firstStartedPulling="2026-01-28 11:37:42.035679028 +0000 UTC m=+937.830559012" lastFinishedPulling="2026-01-28 11:37:44.89759127 +0000 UTC m=+940.692471254" observedRunningTime="2026-01-28 11:37:46.107771715 +0000 UTC m=+941.902651709" watchObservedRunningTime="2026-01-28 11:37:46.113730127 +0000 UTC m=+941.908610111" Jan 28 11:37:46 crc 
kubenswrapper[4804]: I0128 11:37:46.248118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.451843 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.451907 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.477696 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:46 crc kubenswrapper[4804]: W0128 11:37:46.490078 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490a3033_f3bb_4a92_a03e_03ada6af8280.slice/crio-d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17 WatchSource:0}: Error finding container d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17: Status 404 returned error can't find the container with id d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17 Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.514688 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:47 crc kubenswrapper[4804]: I0128 11:37:47.101262 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerStarted","Data":"d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17"} Jan 28 11:37:47 crc kubenswrapper[4804]: I0128 11:37:47.154660 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:48 crc kubenswrapper[4804]: I0128 11:37:48.110320 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="2ee4796032efcf54d68f75c9e1e04544636a0c96d591e322215de56136c937f0" exitCode=0 Jan 28 11:37:48 crc kubenswrapper[4804]: I0128 11:37:48.110368 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"2ee4796032efcf54d68f75c9e1e04544636a0c96d591e322215de56136c937f0"} Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.086094 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.117271 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpglg" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" containerID="cri-o://18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" gracePeriod=2 Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.618047 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720698 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720811 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720969 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.721688 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities" (OuterVolumeSpecName: "utilities") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.726730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx" (OuterVolumeSpecName: "kube-api-access-n6ztx") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "kube-api-access-n6ztx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.772131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822140 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822185 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822200 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.123974 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="eb4e45d9dabea51515670fa9925b8bbeff67a087ae29f57c10d07517c484e3bc" exitCode=0 Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.124064 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"eb4e45d9dabea51515670fa9925b8bbeff67a087ae29f57c10d07517c484e3bc"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127771 4804 generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" exitCode=0 Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127819 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127828 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127935 4804 scope.go:117] "RemoveContainer" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.160145 4804 scope.go:117] "RemoveContainer" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.163061 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.167679 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.190386 4804 scope.go:117] "RemoveContainer" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.207685 4804 scope.go:117] "RemoveContainer" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.208159 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": container with ID starting with 18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d not found: ID does not exist" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208205 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} err="failed to get container status \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": rpc error: code = NotFound desc = could not find container \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": container with ID starting with 18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208240 4804 scope.go:117] "RemoveContainer" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.208564 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": container with ID starting with f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646 not found: ID does not exist" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208602 4804 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646"} err="failed to get container status \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": rpc error: code = NotFound desc = could not find container \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": container with ID starting with f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646 not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208623 4804 scope.go:117] "RemoveContainer" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.209079 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": container with ID starting with 7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41 not found: ID does not exist" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.209103 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41"} err="failed to get container status \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": rpc error: code = NotFound desc = could not find container \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": container with ID starting with 7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41 not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.922924 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" path="/var/lib/kubelet/pods/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b/volumes" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.135993 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="13fc07c706217dd316621b62486c27851a4bdc65246c2371fdaed612e1e6e287" exitCode=0 Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.136104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"13fc07c706217dd316621b62486c27851a4bdc65246c2371fdaed612e1e6e287"} Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.240821 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.240948 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.282716 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.330605 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.447823 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554684 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.555809 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle" (OuterVolumeSpecName: "bundle") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.562033 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm" (OuterVolumeSpecName: "kube-api-access-2h9sm") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "kube-api-access-2h9sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.656581 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.656620 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.996920 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util" (OuterVolumeSpecName: "util") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.061672 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153288 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17"} Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153328 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153525 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:55 crc kubenswrapper[4804]: I0128 11:37:55.691319 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:55 crc kubenswrapper[4804]: I0128 11:37:55.691902 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vx6td" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" containerID="cri-o://d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" gracePeriod=2 Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.172452 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" exitCode=0 Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.172515 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677"} Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.604588 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709430 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709493 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709590 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.710468 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities" (OuterVolumeSpecName: "utilities") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.716137 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9" (OuterVolumeSpecName: "kube-api-access-2gnj9") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "kube-api-access-2gnj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.758120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810761 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810816 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810829 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180568 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"e7c74f09e0faa8b24cdcfbf5befa9f8e319aac43fb5b70dd37852eec57f84da2"} Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180915 4804 scope.go:117] "RemoveContainer" containerID="d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180664 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.204681 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.207506 4804 scope.go:117] "RemoveContainer" containerID="bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.209584 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.225774 4804 scope.go:117] "RemoveContainer" containerID="d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058215 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058479 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058492 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058508 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058513 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058523 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058531 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058542 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="pull" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058548 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="pull" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058559 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058571 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058577 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058584 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058589 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058597 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="util" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058603 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="util" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058614 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058719 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058734 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058746 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.059127 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.062092 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cn2xq" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.136587 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.227708 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.329485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.351294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.375362 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.588076 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: W0128 11:37:58.592055 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134135c7_1032_47aa_b0bd_361463826caf.slice/crio-05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398 WatchSource:0}: Error finding container 05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398: Status 404 returned error can't find the container with id 05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398 Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.925084 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97c4398-9f91-4756-998e-ffd494da9163" path="/var/lib/kubelet/pods/a97c4398-9f91-4756-998e-ffd494da9163/volumes" Jan 28 11:37:59 crc kubenswrapper[4804]: I0128 11:37:59.194906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" event={"ID":"134135c7-1032-47aa-b0bd-361463826caf","Type":"ContainerStarted","Data":"05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398"} Jan 28 11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.236422 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" event={"ID":"134135c7-1032-47aa-b0bd-361463826caf","Type":"ContainerStarted","Data":"b1f85cb71fe4fbe86eaf50c3a44e67549139f598b8f0c430ab34a6812c0a577f"} Jan 28 11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.236971 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.263820 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" podStartSLOduration=1.194391686 podStartE2EDuration="6.263803493s" podCreationTimestamp="2026-01-28 11:37:58 +0000 UTC" firstStartedPulling="2026-01-28 11:37:58.594237927 +0000 UTC m=+954.389117911" lastFinishedPulling="2026-01-28 11:38:03.663649734 +0000 UTC m=+959.458529718" observedRunningTime="2026-01-28 11:38:04.260117964 +0000 UTC m=+960.054997948" watchObservedRunningTime="2026-01-28 11:38:04.263803493 +0000 UTC m=+960.058683477" Jan 28 11:38:08 crc kubenswrapper[4804]: I0128 11:38:08.378929 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582624 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582696 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582743 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.583362 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.583415 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" gracePeriod=600 Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294202 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" exitCode=0 Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294588 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294619 4804 scope.go:117] "RemoveContainer" containerID="493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.351238 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.352603 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.355533 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wzwqx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.356085 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.357020 4804 util.go:30] "No sandbox for pod can be found. 
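This is the standard liveness flow: the prober's HTTP GET to 127.0.0.1:8798/health is refused, the probe is marked failed, and kuberuntime kills the container with the pod's 600s termination grace period so it can be restarted (the ContainerDied/ContainerStarted pair that follows). A minimal sketch of such a probe loop; the failureThreshold of 3 and the killFn callback are assumptions for illustration, not kubelet internals:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeLoop GETs url every interval; after failureThreshold consecutive
// failures it asks for the container to be killed with the given grace
// period, as the kubelet does through CRI.
func probeLoop(url string, interval time.Duration, failureThreshold int, killFn func(grace time.Duration)) {
	client := &http.Client{Timeout: 2 * time.Second}
	failures := 0
	for range time.Tick(interval) {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 400 {
			resp.Body.Close()
			failures = 0 // healthy again; reset the counter
			continue
		}
		if err == nil {
			resp.Body.Close()
		}
		failures++
		fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
		if failures >= failureThreshold {
			killFn(600 * time.Second) // gracePeriod=600 as in the log
			return
		}
	}
}

func main() {
	probeLoop("http://127.0.0.1:8798/health", time.Second, 3, func(g time.Duration) {
		fmt.Println("failed liveness probe, will be restarted; grace:", g)
	})
}
```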
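The pod_startup_latency_tracker line a few entries up is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of that arithmetic against the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-28 11:37:58 +0000 UTC")
	firstPull := parse("2026-01-28 11:37:58.594237927 +0000 UTC")
	lastPull := parse("2026-01-28 11:38:03.663649734 +0000 UTC")
	running := parse("2026-01-28 11:38:04.263803493 +0000 UTC")

	e2e := running.Sub(created)          // 6.263803493s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.194391686s, the logged podStartSLOduration
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```

So of the 6.26s wall-clock startup, 5.07s was image pulling, leaving 1.19s attributable to the kubelet and runtime.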
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.359910 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tg24f" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.378462 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.379816 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.387471 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5kbq5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.396639 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.405151 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.416347 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.419939 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.426187 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j2vj6" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436057 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchf8\" (UniqueName: \"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436146 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.454129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.468030 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.468969 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.473544 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-d8n9n" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.481925 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.505730 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.518172 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.521346 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.524659 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-nhrg8" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.529689 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538336 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchf8\" (UniqueName: \"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538468 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538491 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xpm\" (UniqueName: 
\"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod \"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538515 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.540589 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.541537 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.551771 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.552234 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7v747" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.559462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.562018 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.563297 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.569479 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mdsbd" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchf8\" (UniqueName: \"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583034 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.584489 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.587608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.589160 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rn76n" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.616222 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.652143 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655090 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzjn\" (UniqueName: \"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: 
\"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655174 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655207 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655232 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xpm\" (UniqueName: \"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod \"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655256 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655314 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.663305 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.667069 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.671439 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rcf4h" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.694613 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.727757 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.730332 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.733530 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.738852 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.740426 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.747347 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xpm\" (UniqueName: \"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod \"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.748192 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nvv4g" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.749396 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.755381 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzjn\" (UniqueName: \"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: \"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756301 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756327 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756368 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: E0128 11:38:34.757906 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:34 crc kubenswrapper[4804]: E0128 11:38:34.757975 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:35.257959324 +0000 UTC m=+991.052839308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.758532 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.776926 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzjn\" (UniqueName: \"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.780820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: \"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.786528 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.789409 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.790416 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.793001 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wv4st" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.794954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.797226 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.806443 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.807668 4804 util.go:30] "No sandbox for pod can be found. 
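The cert mount for infra-operator fails because the webhook secret does not exist yet, and nestedpendingoperations gates the retry: the operation may not re-run until 500ms after the failure, with the wait growing on repeated failures. A sketch of such an exponential backoff gate; the doubling policy and the 2m2s cap are assumptions modeled on the kubelet's volume backoff, not copied from it:

```go
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond  // first durationBeforeRetry, as logged
	maxBackoff     = 2*time.Minute + 2*time.Second // assumed cap
)

// backoff records the last failure and the current wait for one operation.
type backoff struct {
	lastError time.Time
	wait      time.Duration
}

// recordError starts at initialBackoff and doubles up to maxBackoff.
func (b *backoff) recordError(now time.Time) {
	switch {
	case b.wait == 0:
		b.wait = initialBackoff
	case b.wait*2 > maxBackoff:
		b.wait = maxBackoff
	default:
		b.wait *= 2
	}
	b.lastError = now
}

// tryPermitted refuses a retry inside the backoff window, producing the
// same shape of message as the nestedpendingoperations line above.
func (b *backoff) tryPermitted(now time.Time) error {
	if deadline := b.lastError.Add(b.wait); now.Before(deadline) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), b.wait)
	}
	return nil
}

func main() {
	var b backoff
	now := time.Now()
	b.recordError(now) // secret not found -> MountVolume.SetUp failed
	if err := b.tryPermitted(now.Add(100 * time.Millisecond)); err != nil {
		fmt.Println(err) // still inside the 500ms window
	}
}
```

Once the operator creates the infra-operator-webhook-server-cert secret, the next permitted retry succeeds and the pod proceeds to mount it.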
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.809748 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qmnkx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.824814 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.840168 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.850006 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.851307 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.852697 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.854570 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-z4l97" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.856929 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857514 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857565 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857653 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857807 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.898118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h57zg" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.906848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.912344 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.944479 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.947207 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.953192 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.958530 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m8hd9" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960077 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960167 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229rn\" (UniqueName: \"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960314 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.964508 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.976812 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.977599 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.980631 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.980740 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.982970 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.997660 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7vnf2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.998835 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.020054 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.043620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.048201 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.057387 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.065900 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.067836 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.067960 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.068039 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229rn\" (UniqueName: \"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.068086 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.074690 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c9zbm" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.093907 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.119273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.137704 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.138139 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.147170 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.151600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229rn\" (UniqueName: \"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.163168 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.164478 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.166784 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.170272 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xrqjg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171225 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171253 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171316 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod \"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.171633 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.171676 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:35.671658962 +0000 UTC m=+991.466538946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.193802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.221439 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.223936 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.241315 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.280954 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.281929 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.281976 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod \"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.282006 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.282112 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.282155 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.282140079 +0000 UTC m=+992.077020063 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.289242 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.327961 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.332785 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.357561 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.363785 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wzzgl" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.389125 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.389279 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.402115 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.425063 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod \"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.428098 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.445899 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.448158 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.450228 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pxw8f" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.451362 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.472214 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.499643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.499776 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.514939 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.517084 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.521851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.522772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.523293 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lnb4p" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.532359 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.547819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.547906 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.548934 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.558488 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-784h5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.568518 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601485 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601539 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601618 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601695 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.623084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.627595 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704328 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704427 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704558 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704593 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704651 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705196 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705246 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.705231927 +0000 UTC m=+992.500111911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705526 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705556 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.205544447 +0000 UTC m=+992.000424431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.706261 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.706347 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.206324891 +0000 UTC m=+992.001204875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.736944 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.739264 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.768002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.772057 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.787393 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.862711 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.898680 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.946953 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.954921 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.988817 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.049437 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186e63a0_88e6_404b_963c_e5cb22485277.slice/crio-9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f WatchSource:0}: Error finding container 9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f: Status 404 returned error can't find the container with id 9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.220898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.221304 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221442 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221489 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:37.221474889 +0000 UTC m=+993.016354873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221529 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221547 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:37.221541641 +0000 UTC m=+993.016421625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.322226 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.322416 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.322493 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:38.322474004 +0000 UTC m=+994.117353988 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.357190 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.361723 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.386716 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.425010 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.432049 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7ff5ff_8c23_46f4_9ba6_dda63fa9cce1.slice/crio-261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011 WatchSource:0}: Error finding container 261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011: Status 404 returned error can't find the container with id 261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.433338 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.434460 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1a2428_c6c8_4113_9654_0c58ab91b45b.slice/crio-93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300 WatchSource:0}: Error finding container 93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300: Status 404 returned error can't find the container with id 93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.442553 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.447988 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.504496 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" event={"ID":"acdcc5e8-c284-444e-86c2-96aec766b35b","Type":"ContainerStarted","Data":"4eafef24c53cb67d10543ff1aed77ed6e30fb1e8ae75e6602e351f4588afefaf"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.508629 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" event={"ID":"ec1046a1-b834-40e4-b82a-923885428171","Type":"ContainerStarted","Data":"b53af7c7855cf739fadf4e6e2c6df12f485fca7792fb09d3178884d186293256"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.510038 4804 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" event={"ID":"07990c6c-3350-45a8-85de-1e0db97acb07","Type":"ContainerStarted","Data":"b8663b9dbf27c61ab81a3d421ac11cf9e2478b68ba8cfe7af7369e8526e5d63a"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.511260 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" event={"ID":"8f1a2428-c6c8-4113-9654-0c58ab91b45b","Type":"ContainerStarted","Data":"93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.512265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" event={"ID":"db8796b2-e360-4287-9ba2-4ceda6de770e","Type":"ContainerStarted","Data":"895a3f5511952bbf7089d97085752bf56272c5bf27b267fd5d34d12d5f3df970"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.513711 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" event={"ID":"e770ba97-59e1-4752-8e93-bc7d53ff7c04","Type":"ContainerStarted","Data":"48982f5d0ff8a2b65899f3157144028f8be0420b71da1d2c1c5066be864990c4"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.514904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" event={"ID":"186e63a0-88e6-404b-963c-e5cb22485277","Type":"ContainerStarted","Data":"9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.515848 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" event={"ID":"c36b33fc-3ff6-4c44-a079-bc48a5a3d509","Type":"ContainerStarted","Data":"83f3ec579dfec43c7d39c2d9940410471f47310656fde55a514e880646f9de7a"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.516722 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" event={"ID":"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d","Type":"ContainerStarted","Data":"8b937b2020729521500c8535098dfb65146900ac89c091e16ad7f17032b2e0ab"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.517840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" event={"ID":"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048","Type":"ContainerStarted","Data":"3483515f63ecf7c100b081e893ac4dcd41aaad186e12fb0d37fa0b574fa783f7"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.519166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" event={"ID":"d5ce0c1e-3061-46ed-a816-3839144b160a","Type":"ContainerStarted","Data":"8bb892969bb182a9eaf3e5a225dd66b12d6d7f19b92fc93377f5eaf54ba5460e"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.519988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" event={"ID":"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1","Type":"ContainerStarted","Data":"261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.729780 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.730104 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.730217 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:38.730186122 +0000 UTC m=+994.525066106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.790795 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.809918 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.815992 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.823522 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff35634f_2b61_44e4_934a_74b39c5b7335.slice/crio-99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc WatchSource:0}: Error finding container 99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc: Status 404 returned error can't find the container with id 99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.827475 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.834347 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.839337 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.839347 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2k6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-9vgvb_openstack-operators(ff35634f-2b61-44e4-934a-74b39c5b7335): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.841077 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.844942 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.847274 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67fbb1e9_d718_4075_971a_33a245c498e3.slice/crio-d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1 WatchSource:0}: Error finding container d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1: Status 404 returned error can't find the container with id d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.850062 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.850659 4804 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69938639_9ff0_433c_bd73_8d129935e7d4.slice/crio-8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd WatchSource:0}: Error finding container 8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd: Status 404 returned error can't find the container with id 8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.852468 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x74dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-659wf_openstack-operators(67fbb1e9-d718-4075-971a-33a245c498e3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.854598 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.854722 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xskb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-cqlch_openstack-operators(69938639-9ff0-433c-bd73-8d129935e7d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.856358 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.859551 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9dcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-fwd68_openstack-operators(eb1c01a9-6548-49cd-8e1f-4f01daaff754): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.859891 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab2436a_1b54_4c5e_bdc1_959026660c98.slice/crio-380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1 WatchSource:0}: Error finding container 380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1: Status 404 returned error can't find the container with id 380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1 Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.860900 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.862786 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-229rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-4cpk5_openstack-operators(7ab2436a-1b54-4c5e-bdc1-959026660c98): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.863942 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.865552 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb79b961c_583d_4e78_8513_c44ed292c325.slice/crio-7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b WatchSource:0}: Error finding container 7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b: Status 404 returned error can't find the container with id 7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.869999 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9rff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-n9kpn_openstack-operators(b79b961c-583d-4e78-8513-c44ed292c325): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.871311 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.240897 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241005 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.241055 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241073 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:39.241056133 +0000 UTC m=+995.035936117 (durationBeforeRetry 2s). 
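The "pull QPS exceeded" failures above are client-side: the kubelet admits image pulls through a token-bucket rate limiter sized by the KubeletConfiguration fields registryPullQPS and registryBurst (defaults 5 and 10), and a burst of operator pods starting at once drains the bucket. A minimal Go sketch of that admission logic, assuming the default limits rather than this node's actual configuration:

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket with the assumed defaults: registryPullQPS=5, registryBurst=10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	// Fifteen pods asking to pull at the same instant, as in the log:
	// the first ten are admitted, the rest fail fast.
	for i := 1; i <= 15; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			fmt.Printf("pull %2d: ErrImagePull: pull QPS exceeded\n", i)
		}
	}
}

Rejected pulls are not fatal: the pod workers retry them, which is why the same containers reappear below with ImagePullBackOff and eventually start once the bucket refills.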
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241275 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241348 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:39.241325782 +0000 UTC m=+995.036205846 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.529268 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" event={"ID":"67fbb1e9-d718-4075-971a-33a245c498e3","Type":"ContainerStarted","Data":"d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.533466 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" event={"ID":"7ab2436a-1b54-4c5e-bdc1-959026660c98","Type":"ContainerStarted","Data":"380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.536139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" event={"ID":"deece2f8-8c1c-4599-80f4-44e6ec055a18","Type":"ContainerStarted","Data":"c7b16bd5b2eb9279a9e0a4bb6602854b6130872d7e8fc5584f44758f9d427b54"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.537272 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.537550 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" event={"ID":"23a10136-5079-4838-adf9-6512ccfd5f2c","Type":"ContainerStarted","Data":"5ad8a54ec0f8f56f389eb1896ff7edb2cdef873286746d7837759852648f582c"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.539128 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.543280 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" event={"ID":"ff35634f-2b61-44e4-934a-74b39c5b7335","Type":"ContainerStarted","Data":"99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.545246 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.551697 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" event={"ID":"b79b961c-583d-4e78-8513-c44ed292c325","Type":"ContainerStarted","Data":"7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.552707 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" event={"ID":"69938639-9ff0-433c-bd73-8d129935e7d4","Type":"ContainerStarted","Data":"8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.554531 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.555104 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.555480 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" event={"ID":"eb1c01a9-6548-49cd-8e1f-4f01daaff754","Type":"ContainerStarted","Data":"6cef4fd47e6f9491fe048baec30457f92241ea7f58b078ee3ec97beea794a7cd"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.557756 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" Jan 28 11:38:38 crc kubenswrapper[4804]: I0128 11:38:38.364494 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.364657 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.364729 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:42.36469294 +0000 UTC m=+998.159572924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566426 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566548 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566660 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566699 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.571219 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.571644 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:38 crc kubenswrapper[4804]: I0128 11:38:38.777759 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.777972 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.778061 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:42.778038227 +0000 UTC m=+998.572918241 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: I0128 11:38:39.285747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:39 crc kubenswrapper[4804]: I0128 11:38:39.285930 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286000 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286118 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:43.2860916 +0000 UTC m=+999.080971584 (durationBeforeRetry 4s). 
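Every MountVolume.SetUp failure in this stretch is the kubelet polling for a Secret that does not exist yet (metrics-server-cert, webhook-server-cert, infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert); such certificates are typically issued asynchronously (e.g. by cert-manager), and the mounts succeed once they appear, as the infra-operator cert does at 11:39:06 below. Purely as an illustration of the shape the kubelet expects, a kubernetes.io/tls Secret could be created with client-go like this (createWebhookCert is a hypothetical helper, not the operators' own code; certPEM/keyPEM must be a valid PEM pair):

package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// createWebhookCert creates one of the Secrets the kubelet is waiting for.
// Hypothetical helper for illustration only.
func createWebhookCert(certPEM, keyPEM []byte) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "webhook-server-cert", // one of the names in the log
			Namespace: "openstack-operators",
		},
		Type: corev1.SecretTypeTLS,
		Data: map[string][]byte{
			corev1.TLSCertKey:       certPEM, // "tls.crt"
			corev1.TLSPrivateKeyKey: keyPEM,  // "tls.key"
		},
	}
	_, err = cs.CoreV1().Secrets(sec.Namespace).Create(context.TODO(), sec, metav1.CreateOptions{})
	return err
}

func main() {
	certPEM, err := os.ReadFile("tls.crt")
	if err != nil {
		panic(err)
	}
	keyPEM, err := os.ReadFile("tls.key")
	if err != nil {
		panic(err)
	}
	if err := createWebhookCert(certPEM, keyPEM); err != nil {
		panic(err)
	}
}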
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286174 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286268 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:43.286245415 +0000 UTC m=+999.081125389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: I0128 11:38:42.440795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.441016 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.441544 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:50.44151774 +0000 UTC m=+1006.236397724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: I0128 11:38:42.848107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.848382 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.848524 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:50.848490484 +0000 UTC m=+1006.643370678 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: I0128 11:38:43.356685 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:43 crc kubenswrapper[4804]: I0128 11:38:43.356796 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357001 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357042 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357088 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:51.357062713 +0000 UTC m=+1007.151942707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357181 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:51.357162116 +0000 UTC m=+1007.152042090 (durationBeforeRetry 8s). 
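Note the durationBeforeRetry progression across these mount errors: 2s, then 4s, then 8s, with 16s scheduled next; the kubelet backs off exponentially per failing operation instead of retrying at a fixed rate. A sketch of the same policy with apimachinery's wait.Backoff, where the 2s start is read off the log and the cap is an assumption, not the kubelet's internal constant:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{
		Duration: 2 * time.Second, // first durationBeforeRetry seen above
		Factor:   2,               // each failure doubles the delay
		Steps:    5,
		Cap:      2 * time.Minute, // assumed upper bound
	}
	for i := 0; i < 5; i++ {
		fmt.Println("durationBeforeRetry:", b.Step()) // 2s, 4s, 8s, 16s, 32s
	}
}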
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.827154 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.827906 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sv2mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-dndv5_openstack-operators(8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.829087 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" 
podUID="8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.485180 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.485384 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.485624 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:39:06.485607591 +0000 UTC m=+1022.280487575 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.665805 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" event={"ID":"db8796b2-e360-4287-9ba2-4ceda6de770e","Type":"ContainerStarted","Data":"e29209b99c46fdb9842e3b0f93efabe2f0301e4d7d3564b77a8dfaca95b6bd32"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.675987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" event={"ID":"c36b33fc-3ff6-4c44-a079-bc48a5a3d509","Type":"ContainerStarted","Data":"ddc53a5f04a33046ae63d900b98b8c4a6bcde97c259d081d7ca78426d74e7f2a"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.676126 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.680047 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" event={"ID":"23a10136-5079-4838-adf9-6512ccfd5f2c","Type":"ContainerStarted","Data":"7a8189a2d971a4efb202aa5b8be634bba0414514d6edc739073a662c8e2cfad9"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.680907 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.686276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" event={"ID":"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d","Type":"ContainerStarted","Data":"12072f1190dc94a8a9ed29811d2778d960fdc92e7fc25c10465662c4806c1e0b"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.686500 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.696205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" event={"ID":"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048","Type":"ContainerStarted","Data":"71ef32a95976cf58c1d83a22901913052564a7f598676e3e05aa2735a7d3b782"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.696305 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.698595 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" podStartSLOduration=2.830931078 podStartE2EDuration="16.69856843s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.020422949 +0000 UTC m=+991.815302933" lastFinishedPulling="2026-01-28 11:38:49.888060311 +0000 UTC m=+1005.682940285" observedRunningTime="2026-01-28 11:38:50.69162592 +0000 UTC m=+1006.486505924" watchObservedRunningTime="2026-01-28 11:38:50.69856843 +0000 UTC m=+1006.493448414" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.700478 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" event={"ID":"8f1a2428-c6c8-4113-9654-0c58ab91b45b","Type":"ContainerStarted","Data":"2a25bd14770dea745e830d6e517cce22d83f4c6838cec4ba671daf87e7fc27c0"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.700724 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.710148 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" event={"ID":"d5ce0c1e-3061-46ed-a816-3839144b160a","Type":"ContainerStarted","Data":"7add3c1bab2d6c06b9fffd4f9d703efc50adef9e203e7bbd14f785f21739c1c8"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.710465 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.712691 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" event={"ID":"ec1046a1-b834-40e4-b82a-923885428171","Type":"ContainerStarted","Data":"9c283fa1a070758c836bc59fd9aea2ed6bec7718c11fec1ef7827496d8f3a1fe"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.713359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.723759 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" event={"ID":"e770ba97-59e1-4752-8e93-bc7d53ff7c04","Type":"ContainerStarted","Data":"13586ab275c816bc340826486c2c61b3d2058cfb2fd0b170bc80c87f02684f8d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.723815 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.729827 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" 
podStartSLOduration=2.825952971 podStartE2EDuration="16.729802934s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:35.921426119 +0000 UTC m=+991.716306103" lastFinishedPulling="2026-01-28 11:38:49.825276072 +0000 UTC m=+1005.620156066" observedRunningTime="2026-01-28 11:38:50.721278273 +0000 UTC m=+1006.516158257" watchObservedRunningTime="2026-01-28 11:38:50.729802934 +0000 UTC m=+1006.524682918" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.733530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" event={"ID":"07990c6c-3350-45a8-85de-1e0db97acb07","Type":"ContainerStarted","Data":"aebe483f195968df63cb2423fa69db1cc0a49b2dc39619a3ae3262b64d8c7e2d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.734259 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.758435 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" event={"ID":"186e63a0-88e6-404b-963c-e5cb22485277","Type":"ContainerStarted","Data":"470d98258dce0f4a9e032399bf76f152470ae22c9d51354c9f6ba54ec0d61a6d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.759191 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.765164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" event={"ID":"deece2f8-8c1c-4599-80f4-44e6ec055a18","Type":"ContainerStarted","Data":"b30b7fe392e3b992f519ea4c849f88ee6b8911536434017e14be392b80c40558"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.765465 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.766181 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" podStartSLOduration=2.930925652 podStartE2EDuration="16.761865866s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.065398761 +0000 UTC m=+991.860278745" lastFinishedPulling="2026-01-28 11:38:49.896338975 +0000 UTC m=+1005.691218959" observedRunningTime="2026-01-28 11:38:50.759396846 +0000 UTC m=+1006.554276830" watchObservedRunningTime="2026-01-28 11:38:50.761865866 +0000 UTC m=+1006.556745850" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.768457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" event={"ID":"acdcc5e8-c284-444e-86c2-96aec766b35b","Type":"ContainerStarted","Data":"718f2fd22e8e5601797827fd652a9a8efaca90bdeb4ae8c14dc787e065f418b9"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.768487 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.775402 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" podUID="8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.783566 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" podStartSLOduration=3.716175167 podStartE2EDuration="16.783544035s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.819159334 +0000 UTC m=+992.614039318" lastFinishedPulling="2026-01-28 11:38:49.886528192 +0000 UTC m=+1005.681408186" observedRunningTime="2026-01-28 11:38:50.782334296 +0000 UTC m=+1006.577214290" watchObservedRunningTime="2026-01-28 11:38:50.783544035 +0000 UTC m=+1006.578424019" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.829356 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" podStartSLOduration=3.030721498 podStartE2EDuration="16.829339582s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.065310758 +0000 UTC m=+991.860190742" lastFinishedPulling="2026-01-28 11:38:49.863928842 +0000 UTC m=+1005.658808826" observedRunningTime="2026-01-28 11:38:50.821850254 +0000 UTC m=+1006.616730238" watchObservedRunningTime="2026-01-28 11:38:50.829339582 +0000 UTC m=+1006.624219566" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.868753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" podStartSLOduration=3.3488241739999998 podStartE2EDuration="16.868734786s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.366645721 +0000 UTC m=+992.161525705" lastFinishedPulling="2026-01-28 11:38:49.886556323 +0000 UTC m=+1005.681436317" observedRunningTime="2026-01-28 11:38:50.862974493 +0000 UTC m=+1006.657854477" watchObservedRunningTime="2026-01-28 11:38:50.868734786 +0000 UTC m=+1006.663614770" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.892529 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.893394 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.893448 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:39:06.893433493 +0000 UTC m=+1022.688313477 (durationBeforeRetry 16s). 
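The {{...}} fragments in the container dumps above, e.g. cpu: {{500 -3} {} 500m DecimalSI}, are Go's rendering of resource.Quantity internals: an unscaled integer with a base-10 exponent, a cached canonical string (sometimes empty), and the format. Decoding the values that recur in these specs:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("500m")  // {{500 -3}}: 500 × 10⁻³ cores
	mem := resource.MustParse("512Mi") // {{536870912 0}}: 536870912 bytes
	req := resource.MustParse("256Mi") // {{268435456 0}}

	fmt.Println(cpu.MilliValue(), "millicores") // 500
	fmt.Println(mem.Value(), "bytes")           // 536870912
	fmt.Println(req.Value(), "bytes")           // 268435456
}

So each manager container requests 10m CPU / 256Mi of memory and is limited to 500m CPU / 512Mi.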
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.951491 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" podStartSLOduration=3.422757638 podStartE2EDuration="16.951478931s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.38736804 +0000 UTC m=+992.182248014" lastFinishedPulling="2026-01-28 11:38:49.916089313 +0000 UTC m=+1005.710969307" observedRunningTime="2026-01-28 11:38:50.950265652 +0000 UTC m=+1006.745145636" watchObservedRunningTime="2026-01-28 11:38:50.951478931 +0000 UTC m=+1006.746358915" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.994364 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" podStartSLOduration=3.094427917 podStartE2EDuration="16.994346136s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.072461346 +0000 UTC m=+991.867341320" lastFinishedPulling="2026-01-28 11:38:49.972379555 +0000 UTC m=+1005.767259539" observedRunningTime="2026-01-28 11:38:50.982389315 +0000 UTC m=+1006.777269299" watchObservedRunningTime="2026-01-28 11:38:50.994346136 +0000 UTC m=+1006.789226120" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.042815 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" podStartSLOduration=3.627408731 podStartE2EDuration="17.042800747s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.443587879 +0000 UTC m=+992.238467863" lastFinishedPulling="2026-01-28 11:38:49.858979895 +0000 UTC m=+1005.653859879" observedRunningTime="2026-01-28 11:38:51.039499822 +0000 UTC m=+1006.834379806" watchObservedRunningTime="2026-01-28 11:38:51.042800747 +0000 UTC m=+1006.837680731" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.044578 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" podStartSLOduration=3.514635442 podStartE2EDuration="17.044572354s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.369060267 +0000 UTC m=+992.163940251" lastFinishedPulling="2026-01-28 11:38:49.898997179 +0000 UTC m=+1005.693877163" observedRunningTime="2026-01-28 11:38:51.018191874 +0000 UTC m=+1006.813071858" watchObservedRunningTime="2026-01-28 11:38:51.044572354 +0000 UTC m=+1006.839452338" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.065330 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" podStartSLOduration=3.605234016 podStartE2EDuration="17.065310004s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.437787835 +0000 UTC m=+992.232667819" lastFinishedPulling="2026-01-28 11:38:49.897863823 +0000 UTC m=+1005.692743807" observedRunningTime="2026-01-28 11:38:51.060260453 +0000 UTC m=+1006.855140437" 
watchObservedRunningTime="2026-01-28 11:38:51.065310004 +0000 UTC m=+1006.860189988" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.098846 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" podStartSLOduration=3.641333005 podStartE2EDuration="17.098827361s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.439428607 +0000 UTC m=+992.234308591" lastFinishedPulling="2026-01-28 11:38:49.896922963 +0000 UTC m=+1005.691802947" observedRunningTime="2026-01-28 11:38:51.098465999 +0000 UTC m=+1006.893345993" watchObservedRunningTime="2026-01-28 11:38:51.098827361 +0000 UTC m=+1006.893707345" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.126753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" podStartSLOduration=4.064835465 podStartE2EDuration="17.126731179s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.831694373 +0000 UTC m=+992.626574357" lastFinishedPulling="2026-01-28 11:38:49.893590087 +0000 UTC m=+1005.688470071" observedRunningTime="2026-01-28 11:38:51.121452421 +0000 UTC m=+1006.916332405" watchObservedRunningTime="2026-01-28 11:38:51.126731179 +0000 UTC m=+1006.921611153" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.398771 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.398920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.398971 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399058 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:39:07.399034967 +0000 UTC m=+1023.193914951 (durationBeforeRetry 16s). 
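The pod_startup_latency_tracker entries report two durations per pod: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so registry wait time does not count against the startup SLO. Re-deriving the cinder-operator numbers from the entry above (a sketch of the arithmetic, not the tracker's code):

package main

import (
	"fmt"
	"time"
)

// Go's time.Time.String() format, as printed in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the cinder-operator-controller-manager-8d874c8fc-j5j86 entry.
	created := mustParse("2026-01-28 11:38:34 +0000 UTC")
	firstPull := mustParse("2026-01-28 11:38:36.020422949 +0000 UTC")
	lastPull := mustParse("2026-01-28 11:38:49.888060311 +0000 UTC")
	running := mustParse("2026-01-28 11:38:50.69856843 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)

	fmt.Println("podStartE2EDuration:", e2e) // 16.69856843s
	fmt.Println("podStartSLOduration:", slo) // ≈2.830931s (logged: 2.830931078s; creation is logged to whole seconds)
}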
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found
Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399073 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399130 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:39:07.399112989 +0000 UTC m=+1023.193993043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found
Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.785489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"
Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.141350 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"
Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.150468 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"
Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.285485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"
Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.361160 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"
Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.630385 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"
Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.824997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" event={"ID":"b79b961c-583d-4e78-8513-c44ed292c325","Type":"ContainerStarted","Data":"2071539e06121fac45b683f4960e5f02191f22ea849321a689959060ba58da84"}
Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.825541 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"
Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.845783 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podStartSLOduration=3.13566418 podStartE2EDuration="22.845767572s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.869745265 +0000 UTC m=+992.664625249" lastFinishedPulling="2026-01-28 11:38:56.579848657 +0000 UTC m=+1012.374728641" observedRunningTime="2026-01-28 11:38:56.842487878 +0000 UTC m=+1012.637367862" watchObservedRunningTime="2026-01-28 11:38:56.845767572 +0000 UTC m=+1012.640647556"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.834257 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" event={"ID":"eb1c01a9-6548-49cd-8e1f-4f01daaff754","Type":"ContainerStarted","Data":"8bd9abd0f32f837cd1e7801bbf3dfd8378986d84e82fe95de434217bb8de39d6"}
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.834544 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.836674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" event={"ID":"7ab2436a-1b54-4c5e-bdc1-959026660c98","Type":"ContainerStarted","Data":"197bad519c2ba331f90a2864e8199f2d38c3e84ea8505111cbd8a5a731405ebe"}
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.837736 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.839336 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" event={"ID":"67fbb1e9-d718-4075-971a-33a245c498e3","Type":"ContainerStarted","Data":"0a011c9eaeed72b2c571ebf8dc81aca6924c6c1913857d1b5206f3479caf322e"}
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.839563 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.841162 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" event={"ID":"ff35634f-2b61-44e4-934a-74b39c5b7335","Type":"ContainerStarted","Data":"b22295c754ec883bf9af52effd5d6e7fa59e734486789257cdf0269cd120953f"}
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.841895 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.843448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" event={"ID":"69938639-9ff0-433c-bd73-8d129935e7d4","Type":"ContainerStarted","Data":"c7d65a15f2beb57ef107124c59354137804c3bf59dab32fb2bab8f703dcec92d"}
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.865673 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podStartSLOduration=4.599565566 podStartE2EDuration="23.865654146s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.859415976 +0000 UTC m=+992.654295960" lastFinishedPulling="2026-01-28 11:38:56.125504556 +0000 UTC m=+1011.920384540" observedRunningTime="2026-01-28 11:38:57.864376856 +0000 UTC m=+1013.659256840" watchObservedRunningTime="2026-01-28 11:38:57.865654146 +0000 UTC m=+1013.660534130"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.916264 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podStartSLOduration=4.261166544 podStartE2EDuration="23.916237456s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.862652338 +0000 UTC m=+992.657532332" lastFinishedPulling="2026-01-28 11:38:56.51772325 +0000 UTC m=+1012.312603244" observedRunningTime="2026-01-28 11:38:57.884285649 +0000 UTC m=+1013.679165633" watchObservedRunningTime="2026-01-28 11:38:57.916237456 +0000 UTC m=+1013.711117440"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.930109 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podStartSLOduration=4.189263077 podStartE2EDuration="23.930083367s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.839091249 +0000 UTC m=+992.633971243" lastFinishedPulling="2026-01-28 11:38:56.579911549 +0000 UTC m=+1012.374791533" observedRunningTime="2026-01-28 11:38:57.913105656 +0000 UTC m=+1013.707985680" watchObservedRunningTime="2026-01-28 11:38:57.930083367 +0000 UTC m=+1013.724963351"
Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.935621 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podStartSLOduration=4.228752162 podStartE2EDuration="23.935604722s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.852293809 +0000 UTC m=+992.647173793" lastFinishedPulling="2026-01-28 11:38:56.559146369 +0000 UTC m=+1012.354026353" observedRunningTime="2026-01-28 11:38:57.93458031 +0000 UTC m=+1013.729460294" watchObservedRunningTime="2026-01-28 11:38:57.935604722 +0000 UTC m=+1013.730484706"
Jan 28 11:39:01 crc kubenswrapper[4804]: I0128 11:39:01.919360 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 11:39:01 crc kubenswrapper[4804]: I0128 11:39:01.953619 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podStartSLOduration=7.249079704 podStartE2EDuration="26.95359876s" podCreationTimestamp="2026-01-28 11:38:35 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.85453288 +0000 UTC m=+992.649412864" lastFinishedPulling="2026-01-28 11:38:56.559051936 +0000 UTC m=+1012.353931920" observedRunningTime="2026-01-28 11:38:57.954208535 +0000 UTC m=+1013.749088529" watchObservedRunningTime="2026-01-28 11:39:01.95359876 +0000 UTC m=+1017.748478744"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.702668 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.731752 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.742204 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.759618 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.799849 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.855823 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.969019 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"
Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.982777 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"
Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.169497 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"
Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.293690 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"
Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.483236 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"
Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.771960 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"
Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.790265 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.533264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.543758 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.729571 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7v747"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.738197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.939784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"
Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.945285 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.112096 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m8hd9"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.119542 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.179053 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"]
Jan 28 11:39:07 crc kubenswrapper[4804]: W0128 11:39:07.190246 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75f08ff_7d3c_4fb4_a366_1c996771a71d.slice/crio-007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553 WatchSource:0}: Error finding container 007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553: Status 404 returned error can't find the container with id 007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.447214 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.447294 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.451072 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.451199 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.492586 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lnb4p"
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.501091 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:07 crc kubenswrapper[4804]: W0128 11:39:07.556584 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda26075bd_4d23_463a_abe8_575a02ebc9ad.slice/crio-e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17 WatchSource:0}: Error finding container e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17: Status 404 returned error can't find the container with id e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.561346 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"]
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.726652 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"]
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.915128 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" event={"ID":"f75f08ff-7d3c-4fb4-a366-1c996771a71d","Type":"ContainerStarted","Data":"007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553"}
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.916661 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" event={"ID":"58f748c2-ceb6-4d34-8a2e-8227e59ef560","Type":"ContainerStarted","Data":"1e544f953665c7fe834e4f2624ccece8a156709e56ca10a1f27eaaa71e3309ce"}
Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.917531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" event={"ID":"a26075bd-4d23-463a-abe8-575a02ebc9ad","Type":"ContainerStarted","Data":"e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17"}
Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.926229 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" event={"ID":"58f748c2-ceb6-4d34-8a2e-8227e59ef560","Type":"ContainerStarted","Data":"e3cf0f2cb0f2fbd8843dfb8f1bd8759058482fec6da19b034127db5e3fc2398e"}
Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.926587 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.952850 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" podStartSLOduration=33.952830143 podStartE2EDuration="33.952830143s" podCreationTimestamp="2026-01-28 11:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:39:08.949039702 +0000 UTC m=+1024.743919696" watchObservedRunningTime="2026-01-28 11:39:08.952830143 +0000 UTC m=+1024.747710127"
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.969979 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" event={"ID":"a26075bd-4d23-463a-abe8-575a02ebc9ad","Type":"ContainerStarted","Data":"f03cfe72bdd8bab0fbb8940737aebd535fd0f2dc1d608dde1bc5d7dbef124231"}
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.970523 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.971495 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" event={"ID":"f75f08ff-7d3c-4fb4-a366-1c996771a71d","Type":"ContainerStarted","Data":"a2814bc7654130ef669480e154b22a349677f98f5b93509bddd76348efb0826e"}
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.971551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.973042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" event={"ID":"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1","Type":"ContainerStarted","Data":"1ff2f0b9fe55e9903b74563ce7c5e858365452350570fedfd3b61f25bcca9b0b"}
Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.973231 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"
Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.001838 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" podStartSLOduration=34.255202407 podStartE2EDuration="40.001822237s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:39:07.5695002 +0000 UTC m=+1023.364380184" lastFinishedPulling="2026-01-28 11:39:13.31612003 +0000 UTC m=+1029.111000014" observedRunningTime="2026-01-28 11:39:13.996496368 +0000 UTC m=+1029.791376372" watchObservedRunningTime="2026-01-28 11:39:14.001822237 +0000 UTC m=+1029.796702211"
Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.014705 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" podStartSLOduration=33.891886872 podStartE2EDuration="40.014683897s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:39:07.192226461 +0000 UTC m=+1022.987106445" lastFinishedPulling="2026-01-28 11:39:13.315023486 +0000 UTC m=+1029.109903470" observedRunningTime="2026-01-28 11:39:14.01322809 +0000 UTC m=+1029.808108074" watchObservedRunningTime="2026-01-28 11:39:14.014683897 +0000 UTC m=+1029.809563891"
Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.030421 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" podStartSLOduration=4.326266466 podStartE2EDuration="40.030400387s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.435018446 +0000 UTC m=+992.229898430" lastFinishedPulling="2026-01-28 11:39:12.139152367 +0000 UTC m=+1027.934032351" observedRunningTime="2026-01-28 11:39:14.026450672 +0000 UTC m=+1029.821330656" watchObservedRunningTime="2026-01-28 11:39:14.030400387 +0000 UTC m=+1029.825280371"
Jan 28 11:39:17 crc kubenswrapper[4804]: I0128 11:39:17.508714 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"
Jan 28 11:39:25 crc kubenswrapper[4804]: I0128 11:39:25.227816 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"
Jan 28 11:39:26 crc kubenswrapper[4804]: I0128 11:39:26.746580 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"
Jan 28 11:39:27 crc kubenswrapper[4804]: I0128 11:39:27.126856 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.204997 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.210394 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214422 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-87989"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214736 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214769 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214984 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.219718 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.264304 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.266079 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.273822 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.276599 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.299559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.299650 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.400919 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.400992 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.401076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.401106 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.401149 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.402032 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.424227 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502815 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.503704 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.503806 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.523491 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.540253 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.583872 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.788106 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:42 crc kubenswrapper[4804]: W0128 11:39:42.078448 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a7a23b8_853e_4c7e_8865_b4857330ae7a.slice/crio-ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e WatchSource:0}: Error finding container ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e: Status 404 returned error can't find the container with id ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.081294 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.169773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" event={"ID":"4a7a23b8-853e-4c7e-8865-b4857330ae7a","Type":"ContainerStarted","Data":"ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e"}
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.170841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" event={"ID":"6cc67125-e00e-437f-aa24-de4207035567","Type":"ContainerStarted","Data":"80b9eeef4de23f3be32b5f9a2473b5327cc05d25c610781a1840375e074ddb02"}
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.470407 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.500319 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.501428 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.516110 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644631 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644681 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644713 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746101 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746169 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.747290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.748641 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.785684 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.824020 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.166943 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.177236 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.220486 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.225062 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.234863 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360463 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360508 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360789 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463047 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463757 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.464933 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.465388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.501173 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.560543 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.643867 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.645419 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.649238 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.652922 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653123 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653318 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lq4ln"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653323 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653371 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653495 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.658325 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781676 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781704 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781733 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781866 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782020 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782341 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884449 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884500 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884534 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884686 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884714 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884767 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884788 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.886542 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.887697 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.892807 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.894041 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.895684 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.898237 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.898668 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.916395 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.917431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.918270 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.929272 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.940618 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.951364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.969757 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.225901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerStarted","Data":"d7d8077111dc71deae67122e06d45da608d04892783ac52603e5acfd01f98f37"}
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.227784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerStarted","Data":"fb24c3a897ceddd1a2c22ed7950667aa2df40c1a865bdacebfbaa2864376b059"}
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.383709 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.401965 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.411837 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.415740 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gp8xk"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.416281 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418041 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418245 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418289 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418353 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.427830 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.428164 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501655 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501753 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501867 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501997 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502064 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502169 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502259 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502288 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502374 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604060 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604117 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604153 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604206 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604278 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604310 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604361 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604396 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604439 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.608173 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611106 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611373 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611706 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611777 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.614397 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.622169 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.622755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.624249 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.641928 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.644846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.670388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.746804 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.264612 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"304507b474cdd7086e7df033bc16291530ac6b5f55a2e85e565b86562e7fde59"} Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.794651 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.801856 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.810124 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vwdrn" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.811208 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.815340 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.815600 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.838734 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.850746 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936199 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936264 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936283 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936740 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936789 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936828 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038872 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038964 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038991 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039007 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039021 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039038 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039079 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.041097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042276 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042574 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042704 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.043171 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.059147 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.061202 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.072451 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: 
\"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.076603 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.154647 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.103701 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.116381 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.122339 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.124188 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.124588 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rk5hn" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.128552 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.133195 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.177696 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.178909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179052 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179113 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179160 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179239 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179275 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179400 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.280970 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281044 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281083 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281139 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281182 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281213 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281356 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.282387 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.283864 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.345873 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.350278 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.350418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.397189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.397476 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.398527 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.409546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.465984 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.505568 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.506603 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515102 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zbd85" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515402 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515685 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.525544 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588647 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588699 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588753 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588787 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: 
\"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690444 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690504 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690529 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690597 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.691744 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.692431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.697571 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.718210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.727275 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.828929 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.308537 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.310931 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.315076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-t7p54" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.332943 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.433814 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.535526 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.566829 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.640449 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.657644 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.660356 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.663285 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9cq62" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.666571 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.667547 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680128 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680365 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680420 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.688048 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.746785 4804 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.748497 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.754697 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786341 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786442 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786497 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786600 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786633 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786731 
4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786757 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786792 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786848 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.788346 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.789213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.789383 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.791449 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.799581 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: 
\"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.800184 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.822670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.892768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893145 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893354 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894148 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894347 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894513 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.896769 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.921447 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.987751 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:53 crc kubenswrapper[4804]: I0128 11:39:53.064087 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.605817 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.607336 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.611728 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-q2swt" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612066 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612226 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612063 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.678493 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729778 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729858 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730056 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730090 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730120 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.832934 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833052 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833176 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833233 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833286 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833321 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 
11:39:54.833761 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.834036 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.834557 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.835247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.841183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.842505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.844220 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.857832 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.874257 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.943954 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.628584 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.640558 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.643724 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644042 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644723 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mmrcs" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.652202 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.796924 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797093 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797138 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797171 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc 
kubenswrapper[4804]: I0128 11:39:57.797248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797273 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899328 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899420 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899509 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899556 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899580 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899794 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.903308 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.903678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.905357 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.906235 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.911231 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.917656 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.917773 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.924138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.992542 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.070396 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.071269 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lk9ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-2gsvc_openstack(6cc67125-e00e-437f-aa24-de4207035567): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.073088 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" podUID="6cc67125-e00e-437f-aa24-de4207035567" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.132410 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.132920 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccx54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-69x8l_openstack(4a7a23b8-853e-4c7e-8865-b4857330ae7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.134142 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" podUID="4a7a23b8-853e-4c7e-8865-b4857330ae7a" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.298022 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.298199 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdvcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-xc7n9_openstack(2bf63c78-fb1d-4777-9643-0923cf3a4c57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.299665 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.460675 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: W0128 11:40:08.475390 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24549b02_2977_49ee_8f25_a6ed25e523d1.slice/crio-b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4 WatchSource:0}: Error finding container b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4: Status 404 returned error can't find the container with id b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4 Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.507862 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4"} Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.509725 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" 
pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.563115 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.589378 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.673639 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.674211 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-6pb25_openstack(303230dd-ae75-4c0f-abb8-be1086a098c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.676057 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.718084 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.764001 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.868343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.961590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.017058 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.033408 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.147346 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176311 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"6cc67125-e00e-437f-aa24-de4207035567\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176626 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"6cc67125-e00e-437f-aa24-de4207035567\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176653 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.177549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config" (OuterVolumeSpecName: "config") pod "6cc67125-e00e-437f-aa24-de4207035567" (UID: "6cc67125-e00e-437f-aa24-de4207035567"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.177558 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.179027 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config" (OuterVolumeSpecName: "config") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.182440 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54" (OuterVolumeSpecName: "kube-api-access-ccx54") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). InnerVolumeSpecName "kube-api-access-ccx54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.246578 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql" (OuterVolumeSpecName: "kube-api-access-lk9ql") pod "6cc67125-e00e-437f-aa24-de4207035567" (UID: "6cc67125-e00e-437f-aa24-de4207035567"). InnerVolumeSpecName "kube-api-access-lk9ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279507 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279556 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279570 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279582 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279594 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.517503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerStarted","Data":"33b738bafa7ea125cb6f8e21be749a37e8dc0b050b5dffa31b3e9875c08ddd2d"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.519296 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerStarted","Data":"86818d705a40c4508845f5e3530cd1a2ecd08917ac1287e69fd364a076602c00"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.520275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.521779 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"a5146612f4e2d80705681617c2e405b8c7dbe80637772da2d39bae9bb807359c"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.522975 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"a6f77cd6c96b39492fe76acbd919310cca2dbd61ed6cf94d721e54f9cb0227d1"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.524255 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerStarted","Data":"e28d6e15bb8b7864184a210b8a21979cfee4c6a5d5b942d21fe32b6ed7b6e02c"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.525762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" event={"ID":"4a7a23b8-853e-4c7e-8865-b4857330ae7a","Type":"ContainerDied","Data":"ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.525839 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.527051 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.527000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" event={"ID":"6cc67125-e00e-437f-aa24-de4207035567","Type":"ContainerDied","Data":"80b9eeef4de23f3be32b5f9a2473b5327cc05d25c610781a1840375e074ddb02"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.528093 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"2ef238b63ba108007593ebb8599aaea3fae02c4b5040dd8085355ce0141a6ab3"} Jan 28 11:40:09 crc kubenswrapper[4804]: E0128 11:40:09.530372 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.609397 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.624980 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.642919 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.647230 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.075053 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:40:10 crc kubenswrapper[4804]: W0128 11:40:10.083057 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c76352_2487_4098_bbee_579834052292.slice/crio-d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771 WatchSource:0}: Error finding container d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771: Status 404 returned error can't find the container with id d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771 Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.541048 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771"} Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.930582 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7a23b8-853e-4c7e-8865-b4857330ae7a" path="/var/lib/kubelet/pods/4a7a23b8-853e-4c7e-8865-b4857330ae7a/volumes" Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.931535 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc67125-e00e-437f-aa24-de4207035567" path="/var/lib/kubelet/pods/6cc67125-e00e-437f-aa24-de4207035567/volumes" Jan 28 11:40:11 crc kubenswrapper[4804]: I0128 11:40:11.550225 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb"} Jan 28 11:40:11 crc kubenswrapper[4804]: I0128 11:40:11.554635 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a"} Jan 28 11:40:12 crc kubenswrapper[4804]: I0128 11:40:12.582798 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:40:12 crc kubenswrapper[4804]: I0128 11:40:12.583150 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.619716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.622656 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerStarted","Data":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.622808 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 11:40:19 crc 
kubenswrapper[4804]: I0128 11:40:19.625247 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerStarted","Data":"4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.625464 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xtdr8" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.627740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.629411 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerStarted","Data":"386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.629564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.631069 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.632801 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.634400 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.663801 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.297823707 podStartE2EDuration="29.663775846s" podCreationTimestamp="2026-01-28 11:39:50 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.878329354 +0000 UTC m=+1084.673209338" lastFinishedPulling="2026-01-28 11:40:19.244281483 +0000 UTC m=+1095.039161477" observedRunningTime="2026-01-28 11:40:19.660432089 +0000 UTC m=+1095.455312093" watchObservedRunningTime="2026-01-28 11:40:19.663775846 +0000 UTC m=+1095.458655830" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.707450 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.310888543 podStartE2EDuration="31.707434225s" podCreationTimestamp="2026-01-28 11:39:48 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.776439101 +0000 UTC m=+1084.571319085" lastFinishedPulling="2026-01-28 11:40:18.172984773 +0000 UTC m=+1093.967864767" observedRunningTime="2026-01-28 11:40:19.705521545 +0000 UTC m=+1095.500401529" watchObservedRunningTime="2026-01-28 11:40:19.707434225 +0000 UTC m=+1095.502314199" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.735584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xtdr8" 
podStartSLOduration=18.445745976 podStartE2EDuration="27.735565371s" podCreationTimestamp="2026-01-28 11:39:52 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.729308511 +0000 UTC m=+1084.524188495" lastFinishedPulling="2026-01-28 11:40:18.019127906 +0000 UTC m=+1093.814007890" observedRunningTime="2026-01-28 11:40:19.730041615 +0000 UTC m=+1095.524921609" watchObservedRunningTime="2026-01-28 11:40:19.735565371 +0000 UTC m=+1095.530445355" Jan 28 11:40:20 crc kubenswrapper[4804]: I0128 11:40:20.660530 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" exitCode=0 Jan 28 11:40:20 crc kubenswrapper[4804]: I0128 11:40:20.660765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.670009 4804 generic.go:334] "Generic (PLEG): container finished" podID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" exitCode=0 Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.670103 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677734 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677969 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.719861 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pfzkj" podStartSLOduration=20.670562666 podStartE2EDuration="29.719841973s" podCreationTimestamp="2026-01-28 11:39:52 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.970930802 +0000 UTC m=+1084.765810786" lastFinishedPulling="2026-01-28 11:40:18.020210099 +0000 UTC m=+1093.815090093" observedRunningTime="2026-01-28 11:40:21.712712146 +0000 UTC m=+1097.507592130" watchObservedRunningTime="2026-01-28 11:40:21.719841973 +0000 UTC m=+1097.514721957" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.686899 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerStarted","Data":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.687718 4804 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.688964 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.690770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.754316 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podStartSLOduration=3.2275082250000002 podStartE2EDuration="38.7542951s" podCreationTimestamp="2026-01-28 11:39:44 +0000 UTC" firstStartedPulling="2026-01-28 11:39:44.974657755 +0000 UTC m=+1060.769537749" lastFinishedPulling="2026-01-28 11:40:20.50144464 +0000 UTC m=+1096.296324624" observedRunningTime="2026-01-28 11:40:22.715987611 +0000 UTC m=+1098.510867605" watchObservedRunningTime="2026-01-28 11:40:22.7542951 +0000 UTC m=+1098.549175084" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.758269 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.408143301 podStartE2EDuration="29.758263577s" podCreationTimestamp="2026-01-28 11:39:53 +0000 UTC" firstStartedPulling="2026-01-28 11:40:10.085142599 +0000 UTC m=+1085.880022583" lastFinishedPulling="2026-01-28 11:40:22.435262875 +0000 UTC m=+1098.230142859" observedRunningTime="2026-01-28 11:40:22.749536129 +0000 UTC m=+1098.544416143" watchObservedRunningTime="2026-01-28 11:40:22.758263577 +0000 UTC m=+1098.553143561" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.776129 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.503354619 podStartE2EDuration="26.776108484s" podCreationTimestamp="2026-01-28 11:39:56 +0000 UTC" firstStartedPulling="2026-01-28 11:40:09.146693707 +0000 UTC m=+1084.941573681" lastFinishedPulling="2026-01-28 11:40:22.419447562 +0000 UTC m=+1098.214327546" observedRunningTime="2026-01-28 11:40:22.773729269 +0000 UTC m=+1098.568609273" watchObservedRunningTime="2026-01-28 11:40:22.776108484 +0000 UTC m=+1098.570988468" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.993208 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.718473 4804 generic.go:334] "Generic (PLEG): container finished" podID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" exitCode=0 Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.718569 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.724540 4804 generic.go:334] "Generic (PLEG): container finished" podID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerID="71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92" exitCode=0 
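
[annotation] The pod_startup_latency_tracker entries above report both podStartE2EDuration and podStartSLOduration. The logged numbers are consistent with the SLO figure being the end-to-end startup time minus the image-pull window: for openstack/kube-state-metrics-0, 29.663775846s - (11:40:19.244281483 - 11:40:08.878329354) = 19.297823707s, and the dnsmasq-dns-57d769cc4f-xc7n9 entry checks out the same way (38.7542951s - ~35.527s ≈ 3.2275s). Below is a minimal sketch of that arithmetic under this assumption; the program and its variable names are illustrative, not kubelet code, and the timestamps are copied from the kube-state-metrics-0 entry.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values copied from the 11:40:19 pod_startup_latency_tracker entry
        // for openstack/kube-state-metrics-0.
        created := parse("2026-01-28T11:39:50Z")             // podCreationTimestamp
        firstPull := parse("2026-01-28T11:40:08.878329354Z") // firstStartedPulling
        lastPull := parse("2026-01-28T11:40:19.244281483Z")  // lastFinishedPulling
        running := parse("2026-01-28T11:40:19.663775846Z")   // watchObservedRunningTime

        e2e := running.Sub(created)          // podStartE2EDuration: ~29.663775846s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~19.297823707s
        fmt.Printf("e2e=%v slo=%v\n", e2e, slo)
    }

[/annotation]
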
Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.725112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.736210 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.739145 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.764841 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.338695669 podStartE2EDuration="39.764821772s" podCreationTimestamp="2026-01-28 11:39:45 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.592766375 +0000 UTC m=+1084.387646359" lastFinishedPulling="2026-01-28 11:40:18.018892478 +0000 UTC m=+1093.813772462" observedRunningTime="2026-01-28 11:40:24.762645573 +0000 UTC m=+1100.557525587" watchObservedRunningTime="2026-01-28 11:40:24.764821772 +0000 UTC m=+1100.559701756" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.804199 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.982887772 podStartE2EDuration="37.804180874s" podCreationTimestamp="2026-01-28 11:39:47 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.47953213 +0000 UTC m=+1084.274412114" lastFinishedPulling="2026-01-28 11:40:18.300825232 +0000 UTC m=+1094.095705216" observedRunningTime="2026-01-28 11:40:24.794957191 +0000 UTC m=+1100.589837205" watchObservedRunningTime="2026-01-28 11:40:24.804180874 +0000 UTC m=+1100.599060868" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.945182 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.945296 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.993159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.018917 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.045284 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.759746 4804 generic.go:334] "Generic (PLEG): container finished" podID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" exitCode=0 Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.759820 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" 
event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084"} Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.806291 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.814148 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.073348 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.106698 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.108020 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.110100 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.133856 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148794 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.229192 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.230881 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.234034 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251214 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251818 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.253507 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.253649 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.254481 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.256427 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.283044 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.310388 4804 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.312461 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.315367 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.315678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jztfn" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.319485 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.323692 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.325617 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.340009 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.340463 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" containerID="cri-o://a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" gracePeriod=10 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.354976 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.355399 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.355446 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " 
pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.387672 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.396268 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.408857 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.410049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.425279 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460288 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460370 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460394 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460444 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460466 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod 
\"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460503 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460523 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460591 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460619 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460643 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460672 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" 
(UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460708 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460754 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.461253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.463772 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.464401 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.465740 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.468690 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.487964 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc 
kubenswrapper[4804]: I0128 11:40:26.547702 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562620 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562722 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562802 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562831 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562889 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562929 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562991 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.563577 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.564318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565102 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565656 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565860 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.567631 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.567651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 
28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.569755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.571453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.574634 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.589785 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.590772 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.651355 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.775112 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.776864 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerStarted","Data":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.777057 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" containerID="cri-o://7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" gracePeriod=10 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.777365 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.803482 4804 generic.go:334] "Generic (PLEG): container finished" podID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" exitCode=0 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804584 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804722 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804756 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"d7d8077111dc71deae67122e06d45da608d04892783ac52603e5acfd01f98f37"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804774 4804 scope.go:117] "RemoveContainer" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.836349 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.852739 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podStartSLOduration=-9223371993.002058 podStartE2EDuration="43.852717545s" podCreationTimestamp="2026-01-28 11:39:43 +0000 UTC" firstStartedPulling="2026-01-28 11:39:44.206185014 +0000 UTC m=+1060.001064998" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:26.846074634 +0000 UTC m=+1102.640954608" watchObservedRunningTime="2026-01-28 11:40:26.852717545 +0000 UTC m=+1102.647597529" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.873148 4804 scope.go:117] "RemoveContainer" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.891049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901174 4804 scope.go:117] "RemoveContainer" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: E0128 11:40:26.901689 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": container with ID starting with a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300 not found: ID does not exist" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901722 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} err="failed to get container status \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": rpc error: code = NotFound desc = could not find container \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": container with ID starting with a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300 not found: ID does not exist" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901748 4804 scope.go:117] "RemoveContainer" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: E0128 11:40:26.902158 4804 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": container with ID starting with 947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca not found: ID does not exist" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.902180 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca"} err="failed to get container status \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": rpc error: code = NotFound desc = could not find container \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": container with ID starting with 947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca not found: ID does not exist" Jan 28 11:40:26 crc kubenswrapper[4804]: W0128 11:40:26.911707 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7359aec_58b3_4254_8765_cdc131e5f912.slice/crio-79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d WatchSource:0}: Error finding container 79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d: Status 404 returned error can't find the container with id 79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.973850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.973950 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.974051 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.987534 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs" (OuterVolumeSpecName: "kube-api-access-fdvcs") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "kube-api-access-fdvcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.991836 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.027514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config" (OuterVolumeSpecName: "config") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.037475 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076716 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076762 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076779 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157424 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157779 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157793 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.164780 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.197770 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.269798 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.318835 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:27 crc kubenswrapper[4804]: W0128 11:40:27.348059 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda31c7f4f_6e39_4542_b3f8_d5bfdcc0831c.slice/crio-1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b WatchSource:0}: Error finding container 1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b: Status 404 returned error can't find the container with id 1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383039 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.388987 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp" (OuterVolumeSpecName: "kube-api-access-8t9pp") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "kube-api-access-8t9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.420329 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config" (OuterVolumeSpecName: "config") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.421011 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.485780 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.486375 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.486388 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813070 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a94ea74-636e-4cb7-803b-01e91be31160" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c" exitCode=0 Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813159 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerStarted","Data":"a7461c4eba1d22105afb8f1414a73b5899821b91e52e8b7869ad478250c3c188"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.815490 4804 generic.go:334] "Generic (PLEG): container finished" podID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerID="a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178" exitCode=0 Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.815575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.815642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerStarted","Data":"1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819687 4804 generic.go:334] "Generic (PLEG): container finished" podID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" exitCode=0 Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819778 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819824 4804 scope.go:117] "RemoveContainer" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819807 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"fb24c3a897ceddd1a2c22ed7950667aa2df40c1a865bdacebfbaa2864376b059"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.821711 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerStarted","Data":"565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.821887 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerStarted","Data":"79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.827541 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"1c34e1e54f29019381489766526d85a7ed81f51d7a176f0cfb6db1161fa7dad8"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.866943 4804 scope.go:117] "RemoveContainer" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.899749 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gtg97" podStartSLOduration=1.899724019 podStartE2EDuration="1.899724019s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:27.887452969 +0000 UTC m=+1103.682332953" watchObservedRunningTime="2026-01-28 11:40:27.899724019 +0000 UTC m=+1103.694604013" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.921400 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.927920 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.932657 4804 scope.go:117] "RemoveContainer" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: E0128 11:40:27.933090 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": container with ID starting with 7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024 not found: ID does not exist" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.933194 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} err="failed to get container status \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": rpc 
error: code = NotFound desc = could not find container \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": container with ID starting with 7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024 not found: ID does not exist" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.933222 4804 scope.go:117] "RemoveContainer" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: E0128 11:40:27.935248 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": container with ID starting with 9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084 not found: ID does not exist" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.935296 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084"} err="failed to get container status \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": rpc error: code = NotFound desc = could not find container \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": container with ID starting with 9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084 not found: ID does not exist" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.467423 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.467765 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.829867 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.835956 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00"} Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.837405 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerStarted","Data":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"} Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.838150 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.840104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerStarted","Data":"5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b"} Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.840652 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.870363 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podStartSLOduration=2.870343061 podStartE2EDuration="2.870343061s" 
podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:28.865815937 +0000 UTC m=+1104.660695931" watchObservedRunningTime="2026-01-28 11:40:28.870343061 +0000 UTC m=+1104.665223055" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.886711 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" podStartSLOduration=2.886689062 podStartE2EDuration="2.886689062s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:28.880089642 +0000 UTC m=+1104.674969626" watchObservedRunningTime="2026-01-28 11:40:28.886689062 +0000 UTC m=+1104.681569066" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.926770 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" path="/var/lib/kubelet/pods/2bf63c78-fb1d-4777-9643-0923cf3a4c57/volumes" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.927523 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" path="/var/lib/kubelet/pods/303230dd-ae75-4c0f-abb8-be1086a098c5/volumes" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.540019 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.633478 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.850796 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1"} Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.889998 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.53223908 podStartE2EDuration="3.889983564s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="2026-01-28 11:40:27.209922974 +0000 UTC m=+1103.004802958" lastFinishedPulling="2026-01-28 11:40:28.567667458 +0000 UTC m=+1104.362547442" observedRunningTime="2026-01-28 11:40:29.888006542 +0000 UTC m=+1105.682886516" watchObservedRunningTime="2026-01-28 11:40:29.889983564 +0000 UTC m=+1105.684863548" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.584104 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626090 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626388 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626403 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626430 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="init" Jan 
28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626436 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626461 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626474 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626479 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626663 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626682 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.627446 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.644542 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.655770 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.751944 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752002 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752120 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752302 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.832493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853683 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853783 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853846 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853938 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.855057 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.856297 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.856819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc 
kubenswrapper[4804]: I0128 11:40:30.856956 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.860543 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.890839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.938737 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.945609 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.461326 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.705397 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.714149 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.717630 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721205 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721244 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721286 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z6brz" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.732091 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.772777 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.772829 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773014 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773072 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773158 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871302 4804 generic.go:334] "Generic (PLEG): container finished" podID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerID="cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73" exitCode=0 Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871412 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73"} Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871461 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerStarted","Data":"99851c0d89d123f60d87fe5e7b4fa11b90a206a967c2a2ccd24c03d723ee66ce"} Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871928 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns" containerID="cri-o://8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" gracePeriod=10 Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.875863 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876011 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876085 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " 
pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876512 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.877848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.878130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880449 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880480 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880528 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:32.380511778 +0000 UTC m=+1108.175391772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.882813 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.913600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.915554 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.227143 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"] Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.229175 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.235137 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.236211 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.249353 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"] Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.272011 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294133 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294322 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 
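
The etc-swift mount above fails because the projected volume sources a ConfigMap (swift-ring-files) that does not exist yet; it is presumably produced by the swift-ring-rebalance job being scheduled in these same entries, so the mount can only succeed after that job runs. Note the retry cadence: the first failure is retried after 500ms, and later entries in this log back off to 1s, then 2s, then 4s, i.e. the delay doubles per failed attempt. A minimal sketch of that doubling; the ceiling is an assumption, since the log only shows the first few delays:

package main

import (
	"fmt"
	"time"
)

func main() {
	// First retry delay as logged (durationBeforeRetry 500ms); the
	// 2m2s ceiling is assumed, not shown in this log.
	delay, maxDelay := 500*time.Millisecond, 2*time.Minute+2*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
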
11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294388 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294552 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294702 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.324763 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402240 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402621 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402783 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.403210 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " 
pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406716 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407888 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408562 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b" (OuterVolumeSpecName: "kube-api-access-z5l6b") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "kube-api-access-z5l6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408869 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.410982 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412229 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412260 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412341 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:33.412317545 +0000 UTC m=+1109.207197609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.413253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.414432 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.420670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.436819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.454657 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.464760 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.477153 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config" (OuterVolumeSpecName: "config") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512550 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512901 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512987 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.513171 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.556259 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883041 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a94ea74-636e-4cb7-803b-01e91be31160" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" exitCode=0 Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883199 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883183 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"} Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.886893 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"a7461c4eba1d22105afb8f1414a73b5899821b91e52e8b7869ad478250c3c188"} Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.886916 4804 scope.go:117] "RemoveContainer" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.893600 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerStarted","Data":"a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454"} Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.893896 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.923489 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" podStartSLOduration=2.923468703 podStartE2EDuration="2.923468703s" podCreationTimestamp="2026-01-28 11:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:32.919790306 +0000 UTC m=+1108.714670290" watchObservedRunningTime="2026-01-28 11:40:32.923468703 +0000 UTC m=+1108.718348687" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.957109 4804 scope.go:117] "RemoveContainer" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.968692 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.981791 4804 scope.go:117] "RemoveContainer" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.982386 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": container with ID starting with 8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf not found: ID does not exist" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.982423 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"} err="failed to get container status \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": rpc error: code = NotFound desc = could not find container \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": container with ID starting with 8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf not found: ID does not exist" Jan 28 11:40:32 crc kubenswrapper[4804]: 
I0128 11:40:32.982449 4804 scope.go:117] "RemoveContainer" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c" Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.982760 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": container with ID starting with c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c not found: ID does not exist" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.982786 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"} err="failed to get container status \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": rpc error: code = NotFound desc = could not find container \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": container with ID starting with c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c not found: ID does not exist" Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.985389 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:33 crc kubenswrapper[4804]: W0128 11:40:33.063562 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb46a04b_0e73_46fb_bcdf_a670c30d5531.slice/crio-54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565 WatchSource:0}: Error finding container 54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565: Status 404 returned error can't find the container with id 54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565 Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.064858 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"] Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.430361 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430636 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430675 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430750 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:35.430725149 +0000 UTC m=+1111.225605133 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.902616 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerStarted","Data":"54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565"} Jan 28 11:40:34 crc kubenswrapper[4804]: I0128 11:40:34.927568 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" path="/var/lib/kubelet/pods/2a94ea74-636e-4cb7-803b-01e91be31160/volumes" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.469751 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470087 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470115 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470229 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:39.47018271 +0000 UTC m=+1115.265062694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.850852 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-64l8r"] Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.851340 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851367 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns" Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.851407 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="init" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851416 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="init" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851673 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.852562 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.854827 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.863297 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64l8r"] Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.881045 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.881136 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.983652 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.983732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.984670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.012542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.182898 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.841112 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.061205 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64l8r"] Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.944602 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerStarted","Data":"acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5"} Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.946980 4804 generic.go:334] "Generic (PLEG): container finished" podID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerID="647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789" exitCode=0 Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.947015 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerDied","Data":"647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789"} Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.947034 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerStarted","Data":"14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232"} Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.978320 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jxgc9" podStartSLOduration=2.214823764 podStartE2EDuration="5.978269837s" podCreationTimestamp="2026-01-28 11:40:32 +0000 UTC" firstStartedPulling="2026-01-28 11:40:33.065620278 +0000 UTC m=+1108.860500262" lastFinishedPulling="2026-01-28 11:40:36.829066351 +0000 UTC m=+1112.623946335" observedRunningTime="2026-01-28 11:40:37.966587645 +0000 UTC m=+1113.761467629" watchObservedRunningTime="2026-01-28 11:40:37.978269837 +0000 UTC m=+1113.773149841" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.389525 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.390940 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.397145 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.441148 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.441216 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.542772 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.542825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.543936 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.553396 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.555534 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.559507 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.593617 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.597381 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.644690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.644742 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.712383 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.714010 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.715776 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.726343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746243 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746402 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746440 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.747417 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.747342 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.769170 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.826765 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.829952 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.834762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.848993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.849148 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.852239 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.867083 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.877545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.926662 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.951097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.951425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.996070 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.997360 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.016185 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.033369 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060173 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060246 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060374 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060409 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.061473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.085639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.118943 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.120335 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.123183 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.130570 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.149478 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161632 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161786 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161871 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.162919 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.181523 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.264252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.264399 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.266385 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.267095 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.287034 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.321195 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.432912 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.449902 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.468461 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.468524 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.469330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" (UID: "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.474013 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54" (OuterVolumeSpecName: "kube-api-access-mxf54") pod "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" (UID: "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1"). InnerVolumeSpecName "kube-api-access-mxf54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571030 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571119 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571164 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571281 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571299 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571351 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:47.57133103 +0000 UTC m=+1123.366211014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.578440 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"] Jan 28 11:40:39 crc kubenswrapper[4804]: W0128 11:40:39.581672 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4586997_59ed_4e13_b7ec_3146711f642c.slice/crio-9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b WatchSource:0}: Error finding container 9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b: Status 404 returned error can't find the container with id 9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.689557 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.711815 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.821380 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:39 crc kubenswrapper[4804]: W0128 11:40:39.835604 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903b6b99_b94d_428a_9c9c_7465ef27ad40.slice/crio-ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0 WatchSource:0}: Error finding container 
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.992009 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"]
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.992502 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerStarted","Data":"ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2"}
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996602 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerDied","Data":"14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232"}
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996637 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996653 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232"
Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.998058 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerStarted","Data":"9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b"}
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:39.999936 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerStarted","Data":"5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad"}
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.000023 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerStarted","Data":"f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a"}
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.002911 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerStarted","Data":"ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0"}
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.003837 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerStarted","Data":"75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b"}
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.026494 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5t7jn" podStartSLOduration=2.026470297 podStartE2EDuration="2.026470297s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:40.020642351 +0000 UTC m=+1115.815522335" watchObservedRunningTime="2026-01-28 11:40:40.026470297 +0000 UTC m=+1115.821350281"
Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.947847 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb"
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.013762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerStarted","Data":"1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76"}
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.017409 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"]
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.017661 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" containerID="cri-o://5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" gracePeriod=10
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.042819 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerStarted","Data":"f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549"}
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.044798 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerStarted","Data":"61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96"}
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.046715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerStarted","Data":"f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a"}
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.048386 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerStarted","Data":"07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f"}
Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.838233 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused"
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.074455 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerStarted","Data":"33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d"}
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.077657 4804 generic.go:334] "Generic (PLEG): container finished" podID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerID="5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad" exitCode=0
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.077737 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerDied","Data":"5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad"}
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.079993 4804 generic.go:334] "Generic (PLEG): container finished" podID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerID="5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" exitCode=0
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.080086 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b"}
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.093389 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ec8f-account-create-update-wm9f2" podStartSLOduration=3.093367041 podStartE2EDuration="3.093367041s" podCreationTimestamp="2026-01-28 11:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.092444263 +0000 UTC m=+1117.887324257" watchObservedRunningTime="2026-01-28 11:40:42.093367041 +0000 UTC m=+1117.888247025"
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.108006 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-64l8r"]
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.114812 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-64l8r"]
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.140499 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-zvgmg" podStartSLOduration=4.140481001 podStartE2EDuration="4.140481001s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.135813942 +0000 UTC m=+1117.930693946" watchObservedRunningTime="2026-01-28 11:40:42.140481001 +0000 UTC m=+1117.935360985"
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.164553 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vmdbt" podStartSLOduration=4.164519296 podStartE2EDuration="4.164519296s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.156402238 +0000 UTC m=+1117.951282222" watchObservedRunningTime="2026-01-28 11:40:42.164519296 +0000 UTC m=+1117.959399280"
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.175319 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ea29-account-create-update-fd9sb" podStartSLOduration=4.175274598 podStartE2EDuration="4.175274598s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.168280346 +0000 UTC m=+1117.963160340" watchObservedRunningTime="2026-01-28 11:40:42.175274598 +0000 UTC m=+1117.970154582"
Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.211143 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f8f4-account-create-update-mg2gd" podStartSLOduration=4.211109109 podStartE2EDuration="4.211109109s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.183108758 +0000 UTC m=+1117.977988752" watchObservedRunningTime="2026-01-28 11:40:42.211109109 +0000 UTC m=+1118.005989093"
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.183108758 +0000 UTC m=+1117.977988752" watchObservedRunningTime="2026-01-28 11:40:42.211109109 +0000 UTC m=+1118.005989093" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.582763 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.582836 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.938664 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" path="/var/lib/kubelet/pods/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1/volumes" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.111466 4804 generic.go:334] "Generic (PLEG): container finished" podID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerID="f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549" exitCode=0 Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.111662 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerDied","Data":"f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549"} Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.115262 4804 generic.go:334] "Generic (PLEG): container finished" podID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerID="61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96" exitCode=0 Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.115945 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerDied","Data":"61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96"} Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.430196 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.464741 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.465127 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.465984 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54fa6273-e08e-4dbb-a86b-a8951e4100fa" (UID: "54fa6273-e08e-4dbb-a86b-a8951e4100fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.485635 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch" (OuterVolumeSpecName: "kube-api-access-mhpch") pod "54fa6273-e08e-4dbb-a86b-a8951e4100fa" (UID: "54fa6273-e08e-4dbb-a86b-a8951e4100fa"). InnerVolumeSpecName "kube-api-access-mhpch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.567463 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.567504 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.122779 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.122774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerDied","Data":"f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123226 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123904 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerID="b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb" exitCode=0 Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123952 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.139261 4804 generic.go:334] "Generic (PLEG): container finished" podID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerID="938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a" exitCode=0 Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.139351 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.271828 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380425 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380492 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380693 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380770 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380853 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.389675 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh" (OuterVolumeSpecName: "kube-api-access-9q9hh") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "kube-api-access-9q9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.418835 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config" (OuterVolumeSpecName: "config") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.425515 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.438636 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.445807 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483509 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483543 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483552 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483560 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483569 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.563405 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.568632 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584642 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"8b1029fc-e131-4d00-b538-6f0a17674c75\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584689 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"903b6b99-b94d-428a-9c9c-7465ef27ad40\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584851 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"903b6b99-b94d-428a-9c9c-7465ef27ad40\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584908 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"8b1029fc-e131-4d00-b538-6f0a17674c75\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.587794 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "903b6b99-b94d-428a-9c9c-7465ef27ad40" (UID: "903b6b99-b94d-428a-9c9c-7465ef27ad40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.591568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b1029fc-e131-4d00-b538-6f0a17674c75" (UID: "8b1029fc-e131-4d00-b538-6f0a17674c75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.591976 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw" (OuterVolumeSpecName: "kube-api-access-g97lw") pod "8b1029fc-e131-4d00-b538-6f0a17674c75" (UID: "8b1029fc-e131-4d00-b538-6f0a17674c75"). InnerVolumeSpecName "kube-api-access-g97lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.595551 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5" (OuterVolumeSpecName: "kube-api-access-297x5") pod "903b6b99-b94d-428a-9c9c-7465ef27ad40" (UID: "903b6b99-b94d-428a-9c9c-7465ef27ad40"). InnerVolumeSpecName "kube-api-access-297x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686582 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686623 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686640 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686656 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.148985 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.149082 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.149150 4804 scope.go:117] "RemoveContainer" containerID="5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.151967 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.152684 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerDied","Data":"75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.152722 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.155739 4804 generic.go:334] "Generic (PLEG): container finished" podID="08795da4-549f-437a-9113-51d1003b5668" containerID="f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a" exitCode=0 Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.155806 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerDied","Data":"f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.157218 4804 generic.go:334] "Generic (PLEG): container finished" podID="a4586997-59ed-4e13-b7ec-3146711f642c" containerID="07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f" exitCode=0 Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.157285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerDied","Data":"07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.159199 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.159381 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.160438 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.160826 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164283 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerDied","Data":"ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164332 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164421 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.202733 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.202712416 podStartE2EDuration="1m1.202712416s" podCreationTimestamp="2026-01-28 11:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:45.198107498 +0000 UTC m=+1120.992987482" watchObservedRunningTime="2026-01-28 11:40:45.202712416 +0000 UTC m=+1120.997592400" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.221010 4804 scope.go:117] "RemoveContainer" containerID="a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.234505 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.439972743 podStartE2EDuration="1m2.234486916s" podCreationTimestamp="2026-01-28 11:39:43 +0000 UTC" firstStartedPulling="2026-01-28 11:39:45.417277105 +0000 UTC m=+1061.212157089" lastFinishedPulling="2026-01-28 11:40:08.211791278 +0000 UTC m=+1084.006671262" observedRunningTime="2026-01-28 11:40:45.227849786 +0000 UTC m=+1121.022729770" watchObservedRunningTime="2026-01-28 11:40:45.234486916 +0000 UTC m=+1121.029366900" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.280177 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.289505 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.173818 4804 generic.go:334] "Generic (PLEG): container finished" podID="38148c07-9662-4f0b-8285-a02633a7cd37" containerID="33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d" exitCode=0 Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.173912 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerDied","Data":"33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d"} Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.504004 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.619819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"08795da4-549f-437a-9113-51d1003b5668\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.619973 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"08795da4-549f-437a-9113-51d1003b5668\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.620352 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08795da4-549f-437a-9113-51d1003b5668" (UID: "08795da4-549f-437a-9113-51d1003b5668"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.620618 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.629148 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk" (OuterVolumeSpecName: "kube-api-access-7k2hk") pod "08795da4-549f-437a-9113-51d1003b5668" (UID: "08795da4-549f-437a-9113-51d1003b5668"). InnerVolumeSpecName "kube-api-access-7k2hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.668983 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.722281 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.737562 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"a4586997-59ed-4e13-b7ec-3146711f642c\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823570 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"a4586997-59ed-4e13-b7ec-3146711f642c\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823969 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4586997-59ed-4e13-b7ec-3146711f642c" (UID: "a4586997-59ed-4e13-b7ec-3146711f642c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.824103 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.828032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf" (OuterVolumeSpecName: "kube-api-access-x26qf") pod "a4586997-59ed-4e13-b7ec-3146711f642c" (UID: "a4586997-59ed-4e13-b7ec-3146711f642c"). InnerVolumeSpecName "kube-api-access-x26qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.924301 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" path="/var/lib/kubelet/pods/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c/volumes" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.926682 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.093323 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094186 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094299 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094392 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094487 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094586 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094680 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094782 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094873 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094959 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095079 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095169 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095254 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095330 4804 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095387 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="init" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095453 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="init" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095671 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095745 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095814 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095872 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095957 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096019 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096097 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096924 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.099183 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.110681 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185254 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerDied","Data":"9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b"} Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185304 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185287 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.190981 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.191562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerDied","Data":"ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2"} Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.191581 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.232903 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.233338 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.336313 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.336445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.337160 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.355170 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.421132 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.586968 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.647660 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.678862 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.748839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"38148c07-9662-4f0b-8285-a02633a7cd37\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.749025 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"38148c07-9662-4f0b-8285-a02633a7cd37\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.750111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38148c07-9662-4f0b-8285-a02633a7cd37" (UID: "38148c07-9662-4f0b-8285-a02633a7cd37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.752785 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l" (OuterVolumeSpecName: "kube-api-access-rnz5l") pod "38148c07-9662-4f0b-8285-a02633a7cd37" (UID: "38148c07-9662-4f0b-8285-a02633a7cd37"). InnerVolumeSpecName "kube-api-access-rnz5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.851306 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.851343 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.940014 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.949254 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:40:47 crc kubenswrapper[4804]: W0128 11:40:47.955978 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda587a6a_8109_4c08_8395_f4cd6b078dc7.slice/crio-6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76 WatchSource:0}: Error finding container 6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76: Status 404 returned error can't find the container with id 6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76 Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.197857 4804 generic.go:334] "Generic (PLEG): container finished" podID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerID="acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5" exitCode=0 Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.197937 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerDied","Data":"acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5"} Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200188 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerDied","Data":"1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76"} Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200218 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76" Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200266 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.202954 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerStarted","Data":"1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e"} Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.202987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerStarted","Data":"6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76"} Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.241046 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-w544f" podStartSLOduration=1.241031468 podStartE2EDuration="1.241031468s" podCreationTimestamp="2026-01-28 11:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:48.231461884 +0000 UTC m=+1124.026341868" watchObservedRunningTime="2026-01-28 11:40:48.241031468 +0000 UTC m=+1124.035911452" Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.502027 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.210551 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"4a5bec567872839575faf98626366f5cc236d0134aa37c746f2c87478bb70e91"} Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.211502 4804 generic.go:334] "Generic (PLEG): container finished" podID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerID="1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e" exitCode=0 Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.211970 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerDied","Data":"1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e"} Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.324843 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bnpvd"] Jan 28 11:40:49 crc kubenswrapper[4804]: E0128 11:40:49.325682 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.325694 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.325859 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.326367 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.329206 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.329422 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.347419 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bnpvd"] Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479505 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479636 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479685 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581513 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581609 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod 
\"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.593186 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.594024 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.594178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.604218 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.658554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.808683 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.990608 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991116 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991155 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991212 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991262 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991300 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991347 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991510 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991686 4804 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.992342 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.998523 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr" (OuterVolumeSpecName: "kube-api-access-gmwrr") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "kube-api-access-gmwrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.003003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.015599 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts" (OuterVolumeSpecName: "scripts") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.019182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.021111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.092980 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093024 4804 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093037 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093049 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093061 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093072 4804 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236183 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerDied","Data":"54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565"} Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236229 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236307 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.247111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"} Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.247163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"} Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.277587 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bnpvd"] Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.764163 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w544f" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.911809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"da587a6a-8109-4c08-8395-f4cd6b078dc7\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.911974 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"da587a6a-8109-4c08-8395-f4cd6b078dc7\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.913944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da587a6a-8109-4c08-8395-f4cd6b078dc7" (UID: "da587a6a-8109-4c08-8395-f4cd6b078dc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.916270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4" (OuterVolumeSpecName: "kube-api-access-8rxf4") pod "da587a6a-8109-4c08-8395-f4cd6b078dc7" (UID: "da587a6a-8109-4c08-8395-f4cd6b078dc7"). InnerVolumeSpecName "kube-api-access-8rxf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.013901 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.014403 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.258098 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerStarted","Data":"98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e"} Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.274000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"} Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.274050 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"} Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277207 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerDied","Data":"6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76"} Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277229 
4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76" Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277337 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.036090 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output=< Jan 28 11:40:53 crc kubenswrapper[4804]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 11:40:53 crc kubenswrapper[4804]: > Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.129342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.193808 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"} Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"} Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304897 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"} Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304947 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"} Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.452479 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:53 crc kubenswrapper[4804]: E0128 11:40:53.455372 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455408 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update" Jan 28 11:40:53 crc kubenswrapper[4804]: E0128 11:40:53.455467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerName="swift-ring-rebalance" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455474 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerName="swift-ring-rebalance" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455639 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" 
containerName="swift-ring-rebalance" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455659 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.456260 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.458106 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.468140 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502711 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502904 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503056 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503150 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod 
\"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605248 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605307 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605394 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605506 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605517 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605511 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.606271 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod 
\"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.607548 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.633820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.780249 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:54 crc kubenswrapper[4804]: I0128 11:40:54.359522 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:54 crc kubenswrapper[4804]: I0128 11:40:54.974252 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379852 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379998 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.380007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.387049 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerStarted","Data":"afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.387088 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerStarted","Data":"b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.751052 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.421613 4804 
generic.go:334] "Generic (PLEG): container finished" podID="f4c1d6ce-c590-416e-bca1-300d36330497" containerID="afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b" exitCode=0 Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.422240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerDied","Data":"afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430966 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430980 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.488384 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.511143241 podStartE2EDuration="26.488364053s" podCreationTimestamp="2026-01-28 11:40:30 +0000 UTC" firstStartedPulling="2026-01-28 11:40:48.523177879 +0000 UTC m=+1124.318057863" lastFinishedPulling="2026-01-28 11:40:54.500398691 +0000 UTC m=+1130.295278675" observedRunningTime="2026-01-28 11:40:56.482714193 +0000 UTC m=+1132.277594177" watchObservedRunningTime="2026-01-28 11:40:56.488364053 +0000 UTC m=+1132.283244037" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.774006 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.775718 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.779012 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.788675 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897258 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897520 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897659 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999661 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: 
\"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999684 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000587 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000656 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000830 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001058 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001804 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001999 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.002597 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: 
I0128 11:40:57.030838 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.095368 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.205873 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308578 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308726 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308780 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308824 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308911 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309394 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309675 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts" (OuterVolumeSpecName: "scripts") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309711 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run" (OuterVolumeSpecName: "var-run") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.310020 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.357078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz" (OuterVolumeSpecName: "kube-api-access-n5fxz") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "kube-api-access-n5fxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413403 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413432 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413445 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413454 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413462 4804 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413472 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.454737 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.454978 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerDied","Data":"b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d"} Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.455451 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650016 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:40:57 crc kubenswrapper[4804]: E0128 11:40:57.650454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650471 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650620 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.651194 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.657904 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.659204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.663156 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.670638 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.677145 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721735 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721852 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721905 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.731269 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.732327 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.751745 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.810866 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.834962 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835082 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835119 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835151 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835188 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835783 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835802 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: 
\"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.850815 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.854436 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.914983 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.916210 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.918662 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.919107 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.919371 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.920055 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.936894 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.937943 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.938133 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.938790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.968184 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.979222 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.984303 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.987195 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.988197 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.989367 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:57.997783 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040551 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040630 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040672 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.066914 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.068073 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.074792 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.076582 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.078917 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.081290 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xtdr8" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.083350 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.091102 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.110750 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142518 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142579 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142619 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142675 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142703 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpjf\" (UniqueName: 
\"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142813 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.147473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.163247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.164084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.243960 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244010 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: 
\"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244825 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.245731 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.246488 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.251116 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.263374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.267402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.280862 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.316392 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.330294 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.364375 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.388839 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.407650 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.428635 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.429990 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.440367 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.461367 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerStarted","Data":"77e9032d4b1d0896ab98b1033b917f2c0d9b702e320f4756d982cdbd575cb2f8"} Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.466412 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550175 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550222 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550385 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550428 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550570 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652605 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: 
\"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652851 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652927 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652999 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653018 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653016 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653085 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653118 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.654411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: 
\"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.655855 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.679211 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.746013 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.948120 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" path="/var/lib/kubelet/pods/f4c1d6ce-c590-416e-bca1-300d36330497/volumes" Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.361829 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.470494 4804 generic.go:334] "Generic (PLEG): container finished" podID="46956e08-e267-4021-bf42-69a3e35826e0" containerID="0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494" exitCode=0 Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.470737 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494"} Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.535044 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.549906 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.560396 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.571799 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.583093 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.591281 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.643001 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.024547 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79faecc7_1388_420a_9eee_b47d0ce87f34.slice/crio-e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4 WatchSource:0}: Error 
finding container e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4: Status 404 returned error can't find the container with id e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4 Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.043041 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod518f34a2_84c4_4115_a28d_0251d0fa8064.slice/crio-516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced WatchSource:0}: Error finding container 516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced: Status 404 returned error can't find the container with id 516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.049046 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa2fee1_7544_426c_8cbf_17e7a2b1693c.slice/crio-7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2 WatchSource:0}: Error finding container 7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2: Status 404 returned error can't find the container with id 7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2 Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052399 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052405 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.575256 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerStarted","Data":"6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.575581 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerStarted","Data":"550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.584220 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerStarted","Data":"e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.598457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerStarted","Data":"300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.598716 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602407 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8522-account-create-update-rlttq" podStartSLOduration=9.602391351 podStartE2EDuration="9.602391351s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.596026828 +0000 UTC m=+1142.390906812" watchObservedRunningTime="2026-01-28 11:41:06.602391351 +0000 UTC m=+1142.397271335" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerStarted","Data":"eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602816 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerStarted","Data":"0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.606647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerStarted","Data":"17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.606688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerStarted","Data":"924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.608392 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerStarted","Data":"7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.614457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerStarted","Data":"0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.614498 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerStarted","Data":"afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.618231 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerStarted","Data":"350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.618281 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerStarted","Data":"39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.628527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerStarted","Data":"90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.628576 4804 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerStarted","Data":"516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced"} Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.646929 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podStartSLOduration=10.646912888 podStartE2EDuration="10.646912888s" podCreationTimestamp="2026-01-28 11:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.623258434 +0000 UTC m=+1142.418138428" watchObservedRunningTime="2026-01-28 11:41:06.646912888 +0000 UTC m=+1142.441792872" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.651131 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xtdr8-config-zc8nk" podStartSLOduration=8.651114641 podStartE2EDuration="8.651114641s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.646241456 +0000 UTC m=+1142.441121440" watchObservedRunningTime="2026-01-28 11:41:06.651114641 +0000 UTC m=+1142.445994625" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.666081 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-jqlrv" podStartSLOduration=9.666063737 podStartE2EDuration="9.666063737s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.662361149 +0000 UTC m=+1142.457241143" watchObservedRunningTime="2026-01-28 11:41:06.666063737 +0000 UTC m=+1142.460943711" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.685046 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a291-account-create-update-dlt8t" podStartSLOduration=8.68502747 podStartE2EDuration="8.68502747s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.680565999 +0000 UTC m=+1142.475445983" watchObservedRunningTime="2026-01-28 11:41:06.68502747 +0000 UTC m=+1142.479907464" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.713823 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-753f-account-create-update-2x2r6" podStartSLOduration=9.713806217 podStartE2EDuration="9.713806217s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.698557061 +0000 UTC m=+1142.493437055" watchObservedRunningTime="2026-01-28 11:41:06.713806217 +0000 UTC m=+1142.508686201" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.724067 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-n6kfg" podStartSLOduration=9.724049152 podStartE2EDuration="9.724049152s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.720781978 +0000 UTC m=+1142.515661952" 
watchObservedRunningTime="2026-01-28 11:41:06.724049152 +0000 UTC m=+1142.518929136" Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.740097 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-kcr62" podStartSLOduration=8.740082543 podStartE2EDuration="8.740082543s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.739331879 +0000 UTC m=+1142.534211873" watchObservedRunningTime="2026-01-28 11:41:06.740082543 +0000 UTC m=+1142.534962527" Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.638429 4804 generic.go:334] "Generic (PLEG): container finished" podID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerID="0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.638564 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerDied","Data":"0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.640608 4804 generic.go:334] "Generic (PLEG): container finished" podID="57723f90-020a-42b7-ad6c-49e998417f27" containerID="eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.640670 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerDied","Data":"eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.642078 4804 generic.go:334] "Generic (PLEG): container finished" podID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerID="350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.642132 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerDied","Data":"350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.643956 4804 generic.go:334] "Generic (PLEG): container finished" podID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerID="90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.644007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerDied","Data":"90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.647732 4804 generic.go:334] "Generic (PLEG): container finished" podID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerID="17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.647807 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerDied","Data":"17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 
11:41:07.649803 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerStarted","Data":"630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.656276 4804 generic.go:334] "Generic (PLEG): container finished" podID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerID="6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.656390 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerDied","Data":"6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.661176 4804 generic.go:334] "Generic (PLEG): container finished" podID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerID="91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f" exitCode=0 Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.661858 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerDied","Data":"91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f"} Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.769682 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bnpvd" podStartSLOduration=2.9189039660000002 podStartE2EDuration="18.769664472s" podCreationTimestamp="2026-01-28 11:40:49 +0000 UTC" firstStartedPulling="2026-01-28 11:40:50.321062581 +0000 UTC m=+1126.115942565" lastFinishedPulling="2026-01-28 11:41:06.171823087 +0000 UTC m=+1141.966703071" observedRunningTime="2026-01-28 11:41:07.768472254 +0000 UTC m=+1143.563352238" watchObservedRunningTime="2026-01-28 11:41:07.769664472 +0000 UTC m=+1143.564544456" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.097786 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.156183 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.156416 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" containerID="cri-o://a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" gracePeriod=10 Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.395392 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.402779 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.406898 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.413009 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.440067 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.460258 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.460616 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544649 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"518f34a2-84c4-4115-a28d-0251d0fa8064\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544702 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544730 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"518f34a2-84c4-4115-a28d-0251d0fa8064\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544792 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544855 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"04ea6e04-5420-4f5b-911f-cdaede8220ab\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544892 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544929 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544949 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc 
kubenswrapper[4804]: I0128 11:41:12.545007 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545024 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545057 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"dc6a2a42-6519-46c6-bb24-074e5096001f\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545092 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"dc6a2a42-6519-46c6-bb24-074e5096001f\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545117 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"04ea6e04-5420-4f5b-911f-cdaede8220ab\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545143 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545498 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run" (OuterVolumeSpecName: "var-run") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545559 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.546745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12849043-1f8e-4d1f-aae3-9cbc35ea4361" (UID: "12849043-1f8e-4d1f-aae3-9cbc35ea4361"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.547438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc6a2a42-6519-46c6-bb24-074e5096001f" (UID: "dc6a2a42-6519-46c6-bb24-074e5096001f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.548073 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.553016 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "518f34a2-84c4-4115-a28d-0251d0fa8064" (UID: "518f34a2-84c4-4115-a28d-0251d0fa8064"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.553754 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts" (OuterVolumeSpecName: "scripts") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.554722 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04ea6e04-5420-4f5b-911f-cdaede8220ab" (UID: "04ea6e04-5420-4f5b-911f-cdaede8220ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.558997 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4" (OuterVolumeSpecName: "kube-api-access-m6ct4") pod "518f34a2-84c4-4115-a28d-0251d0fa8064" (UID: "518f34a2-84c4-4115-a28d-0251d0fa8064"). InnerVolumeSpecName "kube-api-access-m6ct4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.574070 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s" (OuterVolumeSpecName: "kube-api-access-wqx6s") pod "04ea6e04-5420-4f5b-911f-cdaede8220ab" (UID: "04ea6e04-5420-4f5b-911f-cdaede8220ab"). 
InnerVolumeSpecName "kube-api-access-wqx6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.581557 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm" (OuterVolumeSpecName: "kube-api-access-b7qzm") pod "dc6a2a42-6519-46c6-bb24-074e5096001f" (UID: "dc6a2a42-6519-46c6-bb24-074e5096001f"). InnerVolumeSpecName "kube-api-access-b7qzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.581911 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd" (OuterVolumeSpecName: "kube-api-access-p9lkd") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "kube-api-access-p9lkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582417 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582531 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582644 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582954 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg" (OuterVolumeSpecName: "kube-api-access-8zqzg") pod "12849043-1f8e-4d1f-aae3-9cbc35ea4361" (UID: "12849043-1f8e-4d1f-aae3-9cbc35ea4361"). InnerVolumeSpecName "kube-api-access-8zqzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.584249 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.584412 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad" gracePeriod=600 Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"57723f90-020a-42b7-ad6c-49e998417f27\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647531 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647622 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"57723f90-020a-42b7-ad6c-49e998417f27\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648446 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648613 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648730 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648826 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648940 4804 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649065 4804 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649151 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649239 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649337 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649444 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649550 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649640 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649715 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649810 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.650270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" (UID: "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.651131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57723f90-020a-42b7-ad6c-49e998417f27" (UID: "57723f90-020a-42b7-ad6c-49e998417f27"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.659113 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb" (OuterVolumeSpecName: "kube-api-access-lxhhb") pod "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" (UID: "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a"). InnerVolumeSpecName "kube-api-access-lxhhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.665145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp" (OuterVolumeSpecName: "kube-api-access-n8jcp") pod "57723f90-020a-42b7-ad6c-49e998417f27" (UID: "57723f90-020a-42b7-ad6c-49e998417f27"). InnerVolumeSpecName "kube-api-access-n8jcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerDied","Data":"924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723969 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723545 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725077 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerDied","Data":"550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725112 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725167 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.733237 4804 generic.go:334] "Generic (PLEG): container finished" podID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerID="a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" exitCode=0 Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.733295 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734684 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerDied","Data":"7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734704 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734709 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerDied","Data":"afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735809 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735896 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751263 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751287 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751301 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751311 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753131 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerDied","Data":"0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753155 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753209 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757443 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerDied","Data":"39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757476 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757542 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764856 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerDied","Data":"516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764902 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764953 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.974685 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.158833 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.159200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.159959 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.160098 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.160325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.171218 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7" (OuterVolumeSpecName: "kube-api-access-xzbk7") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "kube-api-access-xzbk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.228307 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.230111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.232398 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config" (OuterVolumeSpecName: "config") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.234487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.268965 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269001 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269014 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269024 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269035 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.554615 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.564359 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"99851c0d89d123f60d87fe5e7b4fa11b90a206a967c2a2ccd24c03d723ee66ce"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774232 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774569 4804 scope.go:117] "RemoveContainer" containerID="a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.775981 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerStarted","Data":"9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780062 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad" exitCode=0 Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780100 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780311 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.794220 4804 scope.go:117] "RemoveContainer" containerID="cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.801658 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5r69w" podStartSLOduration=9.740742653 podStartE2EDuration="16.801641117s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="2026-01-28 11:41:06.026926054 +0000 UTC m=+1141.821806038" lastFinishedPulling="2026-01-28 11:41:13.087824518 +0000 UTC m=+1148.882704502" observedRunningTime="2026-01-28 11:41:13.795752639 +0000 UTC m=+1149.590632623" watchObservedRunningTime="2026-01-28 11:41:13.801641117 +0000 UTC m=+1149.596521101" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.819154 4804 scope.go:117] "RemoveContainer" containerID="e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.824802 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.830839 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:41:14 crc kubenswrapper[4804]: I0128 11:41:14.927035 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" path="/var/lib/kubelet/pods/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1/volumes" Jan 28 11:41:14 crc kubenswrapper[4804]: I0128 11:41:14.928632 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" path="/var/lib/kubelet/pods/bfa2fee1-7544-426c-8cbf-17e7a2b1693c/volumes" Jan 28 11:41:17 crc kubenswrapper[4804]: I0128 11:41:17.826472 4804 generic.go:334] "Generic (PLEG): container finished" podID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerID="9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d" exitCode=0 Jan 28 
11:41:17 crc kubenswrapper[4804]: I0128 11:41:17.826532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerDied","Data":"9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d"} Jan 28 11:41:18 crc kubenswrapper[4804]: I0128 11:41:18.836997 4804 generic.go:334] "Generic (PLEG): container finished" podID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerID="630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa" exitCode=0 Jan 28 11:41:18 crc kubenswrapper[4804]: I0128 11:41:18.837084 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerDied","Data":"630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa"} Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.133686 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273002 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273193 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.293093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf" (OuterVolumeSpecName: "kube-api-access-htpjf") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "kube-api-access-htpjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.309569 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.325390 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data" (OuterVolumeSpecName: "config-data") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375294 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375554 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375632 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846800 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerDied","Data":"e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4"} Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846852 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846922 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.179810 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180238 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180250 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180260 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180266 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180277 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180283 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180291 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180297 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180310 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" 
containerName="init" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180315 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="init" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180326 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180332 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180342 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180348 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180368 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180374 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180382 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180387 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180399 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180405 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180595 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180604 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180611 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180627 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180637 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180647 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180656 
4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180665 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180673 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.182055 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.201328 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.219815 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.221092 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.226723 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227049 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227197 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227431 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227631 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.241868 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308846 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308933 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308975 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308995 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.309063 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.309092 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.336565 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.365642 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.366058 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366073 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366237 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366757 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377252 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377530 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377664 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p4q8k" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.378174 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412592 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412618 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412635 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412663 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412679 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412731 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412753 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412773 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412794 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412809 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412842 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412857 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412942 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412963 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412986 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.414025 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.414817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415288 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415987 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.416167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.420036 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.423636 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.423729 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b" (OuterVolumeSpecName: "kube-api-access-9wn8b") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "kube-api-access-9wn8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.425411 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.425681 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.431829 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.450922 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.451944 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.454816 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.454976 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.455167 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pl59s" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.461907 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.468969 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.489530 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.491309 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.494858 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.495414 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-682gl" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.495557 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.514722 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.514819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515205 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515235 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515291 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515351 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc 
kubenswrapper[4804]: I0128 11:41:20.515370 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515406 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515422 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515439 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515490 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515544 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515560 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.519407 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.519571 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.522133 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.543939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.553080 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.554424 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.554457 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.555249 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.555839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559254 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559385 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559632 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: 
I0128 11:41:20.560719 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.565188 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.588043 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.595973 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.597691 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rvw8m" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.598010 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616685 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616726 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616759 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616805 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616838 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616857 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616898 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616931 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616948 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616967 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616982 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.617002 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.617019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod 
\"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.628351 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.631476 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.638055 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.667004 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.684093 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.685715 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.686730 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.691792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data" (OuterVolumeSpecName: "config-data") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.703051 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721318 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721379 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721397 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721418 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721464 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721503 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: 
\"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721554 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721580 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721637 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721667 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721681 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721716 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721755 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721767 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.726462 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.726714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.729322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.729404 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.730716 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.730901 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.731146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.732694 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.743303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" 
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.743303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.744575 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.745381 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.745754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.746841 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.751754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823362 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823420 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823451 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823522 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823560 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823677 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823770 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.835193 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.843767 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.845260 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.893399 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.894642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerDied","Data":"98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e"}
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.895081 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.895156 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.921587 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925028 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925162 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925194 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925236 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.927571 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.928598 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.929507 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.930234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.930300 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.932710 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wch49"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.945682 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9brzz"
Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.974633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.021014 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.079115 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"]
Jan 28 11:41:21 crc kubenswrapper[4804]: W0128 11:41:21.128623 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7fcfdff_464d_4f4a_b6f6_d5f864fb47e7.slice/crio-3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6 WatchSource:0}: Error finding container 3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6: Status 404 returned error can't find the container with id 3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6
Jan 28 11:41:21 crc kubenswrapper[4804]: E0128 11:41:21.184874 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5916f11_436f_46f9_b76e_304aa86f91a1.slice\": RecentStats: unable to find data in memory cache]"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.313556 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.387409 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.415962 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.431519 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.468273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gczh7"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.488601 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2swjk"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540194 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540745 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540980 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.541127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.541221 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643355 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643441 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643510 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643532 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643584 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.644276 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.644349 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.646156 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.652079 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.652131 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.667016 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.726049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wch49"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.740630 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: W0128 11:41:21.769527 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b292a47_f331_472d_941e_193e41fee49f.slice/crio-4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7 WatchSource:0}: Error finding container 4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7: Status 404 returned error can't find the container with id 4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.881346 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.900559 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b679z"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.914443 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerStarted","Data":"cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916553 4804 generic.go:334] "Generic (PLEG): container finished" podID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerID="4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a" exitCode=0
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916694 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerDied","Data":"4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerStarted","Data":"3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.922742 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerStarted","Data":"e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.922776 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerStarted","Data":"2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.925057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"fa6e1a12eec8f670dacaf476eeccb44cad0c7ce79723abf8463004426598a522"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.926681 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9brzz"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.926729 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerStarted","Data":"4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.936825 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.972003 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gczh7" podStartSLOduration=1.9719835209999999 podStartE2EDuration="1.971983521s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:21.958594526 +0000 UTC m=+1157.753474510" watchObservedRunningTime="2026-01-28 11:41:21.971983521 +0000 UTC m=+1157.766863505"
Jan 28
11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.360777 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.378539 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.388926 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.389959 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.390209 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.409853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.487919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.487989 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488047 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488073 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488143 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488217 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.569384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.571957 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.579549 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589447 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589496 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589536 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589646 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: 
I0128 11:41:22.593937 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.596434 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.598230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.605990 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.618720 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.623226 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.628953 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: E0128 11:41:22.629927 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle glance kube-api-access-rkbdt], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="77990e19-1287-4a52-a755-927c3fc6f529" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.634103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.643377 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.667915 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.678961 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704379 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704467 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704516 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704575 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704665 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704708 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.805928 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.805983 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806031 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806045 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806137 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.807684 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.817192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: E0128 11:41:22.818742 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle glance kube-api-access-h2glh scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="944195d2-3d17-4cc5-84b5-eb204312c37e" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.837028 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.837619 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.851621 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.876174 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.939033 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerStarted","Data":"39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75"} Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.939070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerStarted","Data":"60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9"} Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.943186 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerStarted","Data":"17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07"} Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952260 4804 generic.go:334] "Generic (PLEG): container finished" podID="f59b13cd-bec2-4590-a661-0cf416b68290" containerID="927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580" exitCode=0 Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952360 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" 
event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerDied","Data":"927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580"} Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerStarted","Data":"51ff06357611f23f4eba2b45be00b86265a17d7ddf09ab8ad09dd74930e3724b"} Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952780 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952869 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.968821 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b679z" podStartSLOduration=2.968794628 podStartE2EDuration="2.968794628s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:22.964701857 +0000 UTC m=+1158.759581841" watchObservedRunningTime="2026-01-28 11:41:22.968794628 +0000 UTC m=+1158.763674602" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.004109 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.004374 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113464 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113840 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113894 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113996 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114017 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114063 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114174 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114214 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114239 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114289 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.120240 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.121094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.125927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs" (OuterVolumeSpecName: "logs") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.127149 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts" (OuterVolumeSpecName: "scripts") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.127802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data" (OuterVolumeSpecName: "config-data") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.133643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs" (OuterVolumeSpecName: "logs") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.138223 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt" (OuterVolumeSpecName: "kube-api-access-rkbdt") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "kube-api-access-rkbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.140665 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.148281 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.149653 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh" (OuterVolumeSpecName: "kube-api-access-h2glh") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "kube-api-access-h2glh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.152924 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts" (OuterVolumeSpecName: "scripts") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.154028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.159272 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data" (OuterVolumeSpecName: "config-data") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.160143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.183106 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217602 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217643 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217657 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217668 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217704 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217714 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217724 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217769 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217781 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217792 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217802 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235314 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235344 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235355 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.246329 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.267276 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.342023 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.342050 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.413946 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.470632 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.477949 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.546748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548164 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548271 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548313 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548341 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: 
\"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548366 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548476 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548563 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548599 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548706 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548766 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.561075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs" (OuterVolumeSpecName: "kube-api-access-pzbzs") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "kube-api-access-pzbzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.561709 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx" (OuterVolumeSpecName: "kube-api-access-9rmjx") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "kube-api-access-9rmjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.577345 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.578030 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.583498 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.588792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config" (OuterVolumeSpecName: "config") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.591543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.601859 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config" (OuterVolumeSpecName: "config") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.606414 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.618602 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.619719 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.629535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653124 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653158 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653171 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653180 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653189 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653198 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653207 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653215 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653224 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653232 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653239 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653248 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.974204 4804 generic.go:334] "Generic (PLEG): container finished" podID="91b4be5e-0f8c-495e-869d-38a047276f33" containerID="653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5" exitCode=0 Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.974276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.974307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerStarted","Data":"4daa812f368862258a1d55a15b7d75718ffc99b127c66500a75ea826f368eb02"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerDied","Data":"3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978507 4804 scope.go:117] "RemoveContainer" containerID="4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978620 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988241 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerDied","Data":"51ff06357611f23f4eba2b45be00b86265a17d7ddf09ab8ad09dd74930e3724b"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988380 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988435 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988480 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.018650 4804 scope.go:117] "RemoveContainer" containerID="927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.099077 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.116728 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.142063 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.169676 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180205 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: E0128 11:41:24.180603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: E0128 11:41:24.180654 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180659 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180811 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180832 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.181800 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189148 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189275 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.208713 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.220946 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.227827 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.273675 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.276393 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.289936 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.291472 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293139 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293164 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293194 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293214 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293301 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293325 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293340 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293483 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.294295 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.395803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396693 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397476 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397569 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397652 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397807 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397826 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397905 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397936 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398396 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398849 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.401874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.406600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.410562 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.420870 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.457191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500332 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500415 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500502 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500530 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500657 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.501180 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.503167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.503248 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.505325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.513042 
4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.517417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.517759 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.518714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.533019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.702118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.929536 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77990e19-1287-4a52-a755-927c3fc6f529" path="/var/lib/kubelet/pods/77990e19-1287-4a52-a755-927c3fc6f529/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.930330 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944195d2-3d17-4cc5-84b5-eb204312c37e" path="/var/lib/kubelet/pods/944195d2-3d17-4cc5-84b5-eb204312c37e/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.930762 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" path="/var/lib/kubelet/pods/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.931479 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" path="/var/lib/kubelet/pods/f59b13cd-bec2-4590-a661-0cf416b68290/volumes" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.058191 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerStarted","Data":"793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a"} Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.058381 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.092610 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" podStartSLOduration=4.092579383 
podStartE2EDuration="4.092579383s" podCreationTimestamp="2026-01-28 11:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:25.079948631 +0000 UTC m=+1160.874828615" watchObservedRunningTime="2026-01-28 11:41:25.092579383 +0000 UTC m=+1160.887459367" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.141423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:25 crc kubenswrapper[4804]: W0128 11:41:25.155504 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395d12eb_6bd8_4dc2_a026_d37da116fa0d.slice/crio-df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0 WatchSource:0}: Error finding container df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0: Status 404 returned error can't find the container with id df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0 Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.303676 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:25 crc kubenswrapper[4804]: W0128 11:41:25.313396 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf00e734_18a4_4614_b272_1d914b5e39ce.slice/crio-5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac WatchSource:0}: Error finding container 5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac: Status 404 returned error can't find the container with id 5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.133532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22"} Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.133897 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0"} Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.137140 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.150654 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.154693 4804 generic.go:334] "Generic (PLEG): container finished" podID="6dc73391-67e1-4f78-9531-509bcf54be36" containerID="e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d" exitCode=0 Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.154783 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" 
event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerDied","Data":"e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.158359 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.195376 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.19535915 podStartE2EDuration="3.19535915s" podCreationTimestamp="2026-01-28 11:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:27.190172385 +0000 UTC m=+1162.985052369" watchObservedRunningTime="2026-01-28 11:41:27.19535915 +0000 UTC m=+1162.990239134" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.321318 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496575 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496708 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496757 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496805 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.501598 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys" 
(OuterVolumeSpecName: "credential-keys") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.502102 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn" (OuterVolumeSpecName: "kube-api-access-cwcqn") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "kube-api-access-cwcqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.503073 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts" (OuterVolumeSpecName: "scripts") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.504054 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.525416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.527568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data" (OuterVolumeSpecName: "config-data") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599421 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599476 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599492 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599504 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599516 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599528 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189226 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerDied","Data":"2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e"} Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189272 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189309 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.395440 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.402254 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.496214 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:41:30 crc kubenswrapper[4804]: E0128 11:41:30.496669 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.496689 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.497012 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.497715 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500019 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500045 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500345 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500362 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500492 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.506944 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.615989 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616088 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616149 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 
11:41:30.616178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650325 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650587 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log" containerID="cri-o://2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22" gracePeriod=30 Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650675 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd" containerID="cri-o://cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f" gracePeriod=30 Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718570 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718603 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718624 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 
crc kubenswrapper[4804]: I0128 11:41:30.718658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718681 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.723973 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.737653 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.737699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.740377 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.740843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.743239 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.754858 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.818233 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.927289 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" path="/var/lib/kubelet/pods/6dc73391-67e1-4f78-9531-509bcf54be36/volumes" Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207439 4804 generic.go:334] "Generic (PLEG): container finished" podID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerID="cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f" exitCode=0 Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207468 4804 generic.go:334] "Generic (PLEG): container finished" podID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerID="2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22" exitCode=143 Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207491 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f"} Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207521 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22"} Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.883210 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.967760 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.968057 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" containerID="cri-o://300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4" gracePeriod=10 Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.096359 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.222599 4804 generic.go:334] "Generic (PLEG): container finished" podID="46956e08-e267-4021-bf42-69a3e35826e0" containerID="300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4" exitCode=0 Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.222736 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4"} Jan 28 11:41:41 crc kubenswrapper[4804]: I0128 11:41:41.305547 4804 generic.go:334] "Generic (PLEG): container finished" podID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerID="39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75" exitCode=0 Jan 28 11:41:41 crc kubenswrapper[4804]: I0128 11:41:41.305690 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" 
event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerDied","Data":"39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75"} Jan 28 11:41:42 crc kubenswrapper[4804]: I0128 11:41:42.097646 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 28 11:41:47 crc kubenswrapper[4804]: I0128 11:41:47.108842 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 28 11:41:47 crc kubenswrapper[4804]: I0128 11:41:47.110315 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.827867 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.833209 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.838057 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986591 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986653 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986673 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986704 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987394 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs" (OuterVolumeSpecName: "logs") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987713 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987785 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987808 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987853 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987894 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: 
\"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987939 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.988348 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.990098 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.014582 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm" (OuterVolumeSpecName: "kube-api-access-4t5gm") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "kube-api-access-4t5gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.017528 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c" (OuterVolumeSpecName: "kube-api-access-hjp6c") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "kube-api-access-hjp6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.027690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts" (OuterVolumeSpecName: "scripts") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.027905 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.029573 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4" (OuterVolumeSpecName: "kube-api-access-5pdb4") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "kube-api-access-5pdb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.045773 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config" (OuterVolumeSpecName: "config") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.049278 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.071320 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data" (OuterVolumeSpecName: "config-data") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.072584 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.073959 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.074484 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.084637 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config" (OuterVolumeSpecName: "config") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090417 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090934 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090983 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090996 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091007 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091017 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091026 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091035 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091043 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091052 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091060 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091101 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091112 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091169 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091195 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.111074 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.193653 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.193730 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerDied","Data":"60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9"} Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382093 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382148 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387201 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"77e9032d4b1d0896ab98b1033b917f2c0d9b702e320f4756d982cdbd575cb2f8"} Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387293 4804 scope.go:117] "RemoveContainer" containerID="300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387672 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.394412 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0"} Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.394494 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.455012 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.473298 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.489066 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.503043 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.514667 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515108 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515126 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log" Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515152 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515159 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd" Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515174 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515181 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync" Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515195 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="init" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515202 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="init" Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515212 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515218 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515385 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515397 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515411 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515426 4804 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.516656 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.520832 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.521194 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.530635 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.602521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.602952 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603269 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603451 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.605203 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: 
\"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.605370 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707470 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707707 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707796 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707869 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708060 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708222 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " 
pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708581 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708897 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.712707 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.712996 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.713505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.716790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.729692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.740749 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.842373 4804 util.go:30] "No sandbox for pod can be 
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.143506 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"]
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.152381 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.184853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"]
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232069 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232124 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232579 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232865 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.233152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.233286 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337767 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.339043 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.339556 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.340098 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.340605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.346351 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.367874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.405965 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.407923 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411409 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411442 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411593 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pl59s" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.437816 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.479914 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541652 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541763 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541784 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541863 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.643829 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.643945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644052 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " 
pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644127 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.648856 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.650653 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.651086 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.652525 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.660386 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.729712 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.928586 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" path="/var/lib/kubelet/pods/395d12eb-6bd8-4dc2-a026-d37da116fa0d/volumes" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.929407 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46956e08-e267-4021-bf42-69a3e35826e0" path="/var/lib/kubelet/pods/46956e08-e267-4021-bf42-69a3e35826e0/volumes" Jan 28 11:41:52 crc kubenswrapper[4804]: I0128 11:41:52.110399 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.247892 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.249492 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.253347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.253984 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.266670 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.310041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.310278 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311054 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311146 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311347 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311386 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311468 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420778 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420817 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420929 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420959 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420989 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.421014 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.427843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.427955 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.428118 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " 
pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.428292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.429087 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.430146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.445178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.575968 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.513273 4804 scope.go:117] "RemoveContainer" containerID="0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494" Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.540369 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.540547 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.541941 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2swjk" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0"
Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.820326 4804 scope.go:117] "RemoveContainer" containerID="cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f"
Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.890229 4804 scope.go:117] "RemoveContainer" containerID="2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22"
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.006715 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"]
Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.027966 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8686dbae_d7dd_4662_81a8_ab51cc85a115.slice/crio-955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337 WatchSource:0}: Error finding container 955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337: Status 404 returned error can't find the container with id 955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337
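[annotation] The &Container{...} blob in the "Unhandled Error" entry above is the kubelet's Go-syntax dump of the container spec it failed to start. Re-rendered as a readable corev1.Container literal, with every field copied from the dump itself (this is only a re-rendering for legibility, not the operator's source):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ro is a small helper for the read-only volume mounts in the dump.
func ro(name, path, sub string) corev1.VolumeMount {
	return corev1.VolumeMount{Name: name, ReadOnly: true, MountPath: path, SubPath: sub}
}

func main() {
	runAsRoot := int64(0) // the dump shows RunAsUser:*0
	c := corev1.Container{
		Name:    "cinder-db-sync",
		Image:   "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified",
		Command: []string{"/bin/bash"},
		Args:    []string{"-c", "/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start"},
		Env: []corev1.EnvVar{
			{Name: "KOLLA_BOOTSTRAP", Value: "TRUE"},
			{Name: "KOLLA_CONFIG_STRATEGY", Value: "COPY_ALWAYS"},
		},
		VolumeMounts: []corev1.VolumeMount{
			ro("etc-machine-id", "/etc/machine-id", ""),
			ro("scripts", "/usr/local/bin/container-scripts", ""),
			ro("config-data", "/var/lib/config-data/merged", ""),
			ro("config-data", "/etc/my.cnf", "my.cnf"),
			ro("db-sync-config-data", "/etc/cinder/cinder.conf.d", ""),
			ro("config-data", "/var/lib/kolla/config_files/config.json", "db-sync-config.json"),
			ro("combined-ca-bundle", "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", "tls-ca-bundle.pem"),
			ro("kube-api-access-5v9bm", "/var/run/secrets/kubernetes.io/serviceaccount", ""),
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{RunAsUser: &runAsRoot},
	}
	fmt.Printf("%+v\n", c)
}
```

The ErrImagePull on this container ("copying config: context canceled") turns into the ImagePullBackOff reported a second later: after a failed pull the kubelet retries with exponential backoff and logs "Back-off pulling image" until the pull succeeds or the pod is deleted.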
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.179586 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"]
Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.189591 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c0b69d_65ba_4cfd_b7d5_b842e64eafb4.slice/crio-712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077 WatchSource:0}: Error finding container 712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077: Status 404 returned error can't find the container with id 712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077
Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.263669 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e31fe0_ad05_40cd_9eee_1597a421a009.slice/crio-77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa WatchSource:0}: Error finding container 77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa: Status 404 returned error can't find the container with id 77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.267075 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.446366 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerStarted","Data":"712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077"}
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.453688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerStarted","Data":"141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c"}
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.458701 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerStarted","Data":"905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9"}
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.458746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerStarted","Data":"955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337"}
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.472499 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"}
Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.476730 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa"}
event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.491372 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerStarted","Data":"c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.494103 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9brzz" podStartSLOduration=2.975824411 podStartE2EDuration="35.494073217s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.962458298 +0000 UTC m=+1157.757338272" lastFinishedPulling="2026-01-28 11:41:54.480707094 +0000 UTC m=+1190.275587078" observedRunningTime="2026-01-28 11:41:55.487837988 +0000 UTC m=+1191.282717972" watchObservedRunningTime="2026-01-28 11:41:55.494073217 +0000 UTC m=+1191.288953201" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.494354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.496929 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4e070e_7b0f_4a60_9383_7e1a61380fc6.slice/crio-17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b WatchSource:0}: Error finding container 17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b: Status 404 returned error can't find the container with id 17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.519552 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qmm7h" podStartSLOduration=25.519526287 podStartE2EDuration="25.519526287s" podCreationTimestamp="2026-01-28 11:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:55.507165073 +0000 UTC m=+1191.302045057" watchObservedRunningTime="2026-01-28 11:41:55.519526287 +0000 UTC m=+1191.314406271" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520636 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" containerID="cri-o://97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" gracePeriod=30 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520765 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" containerID="cri-o://633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" gracePeriod=30 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520781 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} Jan 28 11:41:55 crc kubenswrapper[4804]: E0128 11:41:55.530618 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2swjk" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.531122 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wch49" podStartSLOduration=2.7892619119999997 podStartE2EDuration="35.531106625s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.780363982 +0000 UTC m=+1157.575243966" lastFinishedPulling="2026-01-28 11:41:54.522208695 +0000 UTC m=+1190.317088679" observedRunningTime="2026-01-28 11:41:55.530590699 +0000 UTC m=+1191.325470683" watchObservedRunningTime="2026-01-28 11:41:55.531106625 +0000 UTC m=+1191.325986609" Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.594374 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17438a34_7ac2_4451_b74e_97ebbf9318f3.slice/crio-13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58 WatchSource:0}: Error finding container 13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58: Status 404 returned error can't find the container with id 13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.605022 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.617255 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.617230417000002 podStartE2EDuration="31.617230417s" podCreationTimestamp="2026-01-28 11:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:55.569472467 +0000 UTC m=+1191.364352451" watchObservedRunningTime="2026-01-28 11:41:55.617230417 +0000 UTC m=+1191.412110401" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.245944 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403149 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403266 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403344 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403457 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403516 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403562 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.404041 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs" (OuterVolumeSpecName: "logs") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.409447 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w" (OuterVolumeSpecName: "kube-api-access-l645w") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "kube-api-access-l645w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.409530 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.415483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts" (OuterVolumeSpecName: "scripts") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.428549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.464797 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data" (OuterVolumeSpecName: "config-data") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505408 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505442 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505453 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505483 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505495 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505503 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505512 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.527965 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537177 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537956 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547459 4804 generic.go:334] "Generic (PLEG): container finished" podID="df00e734-18a4-4614-b272-1d914b5e39ce" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" exitCode=143 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547508 4804 generic.go:334] 
"Generic (PLEG): container finished" podID="df00e734-18a4-4614-b272-1d914b5e39ce" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" exitCode=143 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547622 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547669 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547898 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.584034 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8bbc97bf-dkp56" podStartSLOduration=3.584014417 podStartE2EDuration="3.584014417s" podCreationTimestamp="2026-01-28 11:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:56.576182028 +0000 UTC m=+1192.371062012" watchObservedRunningTime="2026-01-28 11:41:56.584014417 +0000 UTC m=+1192.378894401" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587426 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587437 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.588393 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.589907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.595924 4804 generic.go:334] 
"Generic (PLEG): container finished" podID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" exitCode=0 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.597259 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.610412 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.631833 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c6795cf88-vn4sv" podStartSLOduration=6.631816728 podStartE2EDuration="6.631816728s" podCreationTimestamp="2026-01-28 11:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:56.630498357 +0000 UTC m=+1192.425378341" watchObservedRunningTime="2026-01-28 11:41:56.631816728 +0000 UTC m=+1192.426696712" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.880951 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.888076 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.897196 4804 scope.go:117] "RemoveContainer" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.942090 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" path="/var/lib/kubelet/pods/df00e734-18a4-4614-b272-1d914b5e39ce/volumes" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.954826 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: E0128 11:41:56.970925 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971004 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: E0128 11:41:56.971084 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971093 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971595 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971617 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.983287 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.983411 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.027701 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.028572 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.054157 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: E0128 11:41:57.065121 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.065176 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} err="failed to get container status \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.065214 4804 scope.go:117] "RemoveContainer" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: E0128 11:41:57.086072 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.086141 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} err="failed to get container status \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": rpc error: code = NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.086176 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.097118 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} err="failed to get container status 
\"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.097179 4804 scope.go:117] "RemoveContainer" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.107235 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} err="failed to get container status \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": rpc error: code = NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.240337 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.240764 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241017 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241275 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241380 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241439 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241465 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241620 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343519 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343903 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344197 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344286 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344319 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345056 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345750 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345759 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.350176 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.350426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.350709 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.351575 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.363725 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.391386 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.423246 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.964801 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.634530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.639377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerStarted","Data":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.639529 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.645449 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.645516 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"69238578a45f6424f2874038dcb7535af5f39f1f664e37959ae69aa2b648befa"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.660620 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.660596241 podStartE2EDuration="9.660596241s" podCreationTimestamp="2026-01-28 11:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:58.657237364 +0000 UTC m=+1194.452117368" watchObservedRunningTime="2026-01-28 11:41:58.660596241 +0000 UTC m=+1194.455476225" Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.688086 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" podStartSLOduration=8.688065155 podStartE2EDuration="8.688065155s" podCreationTimestamp="2026-01-28 11:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:58.687804996 +0000 UTC m=+1194.482684980" watchObservedRunningTime="2026-01-28 11:41:58.688065155 +0000 UTC m=+1194.482945139" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.660488 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.663692 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.671183 4804 generic.go:334] "Generic (PLEG): container finished" podID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerID="905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9" exitCode=0 Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.671290 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerDied","Data":"905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.700459 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.700424026 podStartE2EDuration="3.700424026s" podCreationTimestamp="2026-01-28 11:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:59.686797462 +0000 UTC m=+1195.481677446" watchObservedRunningTime="2026-01-28 11:41:59.700424026 +0000 UTC m=+1195.495304010" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.843056 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.843187 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.875358 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.887351 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.682930 4804 generic.go:334] "Generic (PLEG): container finished" podID="6b292a47-f331-472d-941e-193e41fee49f" containerID="c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396" exitCode=0 Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.682936 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerDied","Data":"c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396"} Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.683339 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.683641 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.069185 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.132282 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133099 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133290 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133331 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133374 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133665 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.139860 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.140156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.140397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts" (OuterVolumeSpecName: "scripts") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.141795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv" (OuterVolumeSpecName: "kube-api-access-m4hmv") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "kube-api-access-m4hmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.167492 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data" (OuterVolumeSpecName: "config-data") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.184377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236893 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236932 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236942 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236953 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236963 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236974 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.693591 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.694310 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerDied","Data":"955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337"} Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.694333 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.912416 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:01 crc kubenswrapper[4804]: E0128 11:42:01.913202 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.913224 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.913467 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.914127 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918350 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918525 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918767 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918963 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.919113 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.919229 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.921273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053311 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc 
kubenswrapper[4804]: I0128 11:42:02.053378 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053455 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053497 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155170 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155242 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155291 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 
11:42:02.155378 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155457 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155488 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.156557 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.156650 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.161122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.161931 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.162754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.163815 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.165394 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.169519 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.175933 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.176336 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: E0128 11:42:02.195385 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72b9a8c6_1dc2_4083_9cbe_0564721ef7bf.slice/crio-141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.252868 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.709012 4804 generic.go:334] "Generic (PLEG): container finished" podID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerID="141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c" exitCode=0 Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.709333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerDied","Data":"141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c"} Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.841603 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974331 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974388 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974664 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.975651 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs" (OuterVolumeSpecName: "logs") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.980262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67" (OuterVolumeSpecName: "kube-api-access-cjk67") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "kube-api-access-cjk67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.001756 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts" (OuterVolumeSpecName: "scripts") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.003571 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.005488 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data" (OuterVolumeSpecName: "config-data") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077407 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077438 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077448 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077457 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077466 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.744326 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.748575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerDied","Data":"4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7"} Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.748629 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.161656 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:04 crc kubenswrapper[4804]: E0128 11:42:04.175534 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.175570 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.175860 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.178324 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181195 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181454 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181782 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.182810 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-682gl" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.183759 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198586 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198672 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198781 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198815 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198932 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198953 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.204915 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.277540 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.294159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.311919 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312057 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312087 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312283 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.314008 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.323768 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.324143 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.324625 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.329587 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.339615 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.348246 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.516456 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.096596 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125487 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125532 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125637 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.130436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7" (OuterVolumeSpecName: "kube-api-access-5mth7") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). InnerVolumeSpecName "kube-api-access-5mth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.140063 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.180065 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228540 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228683 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228701 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.483123 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.554266 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.554698 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" containerID="cri-o://793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a" gracePeriod=10 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.632530 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:05 crc kubenswrapper[4804]: W0128 11:42:05.635086 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4efe85dc_b64c_4cbe_83f7_89fa462a95a0.slice/crio-7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744 WatchSource:0}: Error finding container 7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744: Status 404 returned error can't find the container with id 7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.688146 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771251 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerDied","Data":"17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771286 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771336 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.782031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"326e140f9daa666bf3c0b563922935205ab7fc5dba38cc45fd96d0a13dcbd798"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.786213 4804 generic.go:334] "Generic (PLEG): container finished" podID="91b4be5e-0f8c-495e-869d-38a047276f33" containerID="793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a" exitCode=0 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.786301 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.788240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerStarted","Data":"7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.816915 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.038162 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052171 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052282 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052417 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: 
I0128 11:42:06.052518 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.069448 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b" (OuterVolumeSpecName: "kube-api-access-8k47b") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "kube-api-access-8k47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.125141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.154942 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.155394 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.166223 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.168621 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.178794 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config" (OuterVolumeSpecName: "config") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.195832 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259461 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259513 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259528 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259539 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.342578 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343072 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343111 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="init" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343119 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="init" Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343131 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343139 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343359 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343394 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.344684 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.351183 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.352871 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.354840 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.355095 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rvw8m" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.355223 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.356161 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.362347 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364438 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364483 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364511 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364597 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.377579 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466393 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466474 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466530 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466573 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466592 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466657 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " 
pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466689 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.468414 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.481033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.488703 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.497378 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.498613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.545666 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.547642 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.569924 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570088 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570111 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570177 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570201 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570221 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570308 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570331 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570355 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.572362 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.580733 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.591545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.599302 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.628038 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.628786 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.662657 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672398 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672480 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672524 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672623 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.673836 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.682643 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.683138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc 
kubenswrapper[4804]: I0128 11:42:06.683798 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.684515 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.717642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.725298 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.727301 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.736030 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783380 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.791130 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885035 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885169 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885247 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.886512 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.890585 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.896568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.897286 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.913753 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerStarted","Data":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.933560 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.937627 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.955474 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.955524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.956454 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.956493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.968626 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"4daa812f368862258a1d55a15b7d75718ffc99b127c66500a75ea826f368eb02"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.976572 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.978138 4804 scope.go:117] "RemoveContainer" containerID="793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.995122 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f885d959c-vhjh4" podStartSLOduration=5.99510069 podStartE2EDuration="5.99510069s" podCreationTimestamp="2026-01-28 11:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:06.964369872 +0000 UTC m=+1202.759249856" watchObservedRunningTime="2026-01-28 11:42:06.99510069 +0000 UTC m=+1202.789980674" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.009177 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.039408 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-659f7cffd6-wm9cj" podStartSLOduration=3.039387009 podStartE2EDuration="3.039387009s" podCreationTimestamp="2026-01-28 11:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:07.010474259 +0000 UTC m=+1202.805354243" watchObservedRunningTime="2026-01-28 11:42:07.039387009 +0000 UTC m=+1202.834266993" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.051534 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.084819 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.118370 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.201477 4804 scope.go:117] "RemoveContainer" containerID="653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.237266 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.424070 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.433448 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.488595 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.504193 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.544588 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:42:07 crc kubenswrapper[4804]: W0128 11:42:07.549176 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod878daeff_34bf_4dab_8118_e42c318849bb.slice/crio-a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d WatchSource:0}: Error finding container a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d: Status 404 returned error can't find the container with id a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.830533 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.921434 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:07 crc kubenswrapper[4804]: W0128 11:42:07.934781 4804 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c46652_7506_4118_a507_a5f2b6668c78.slice/crio-372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46 WatchSource:0}: Error finding container 372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46: Status 404 returned error can't find the container with id 372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46 Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.988264 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerStarted","Data":"dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821"} Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.989806 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46"} Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.993135 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerStarted","Data":"613d25f46f67af98ce70f3f5abf8d934501d6069e147f6af856f94fa63cd3fb2"} Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.994790 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d"} Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997005 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"0556907b161f5a19bd7e76c946764eabb51dab90af80f30118fa8d78582a879a"} Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997472 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:08 crc kubenswrapper[4804]: I0128 11:42:08.012989 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2swjk" podStartSLOduration=3.10179661 podStartE2EDuration="48.012968356s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.484412843 +0000 UTC m=+1157.279292827" lastFinishedPulling="2026-01-28 11:42:06.395584589 +0000 UTC m=+1202.190464573" observedRunningTime="2026-01-28 11:42:08.004901799 +0000 UTC m=+1203.799781793" watchObservedRunningTime="2026-01-28 11:42:08.012968356 +0000 UTC m=+1203.807848340" Jan 28 11:42:08 crc kubenswrapper[4804]: I0128 11:42:08.940344 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" path="/var/lib/kubelet/pods/91b4be5e-0f8c-495e-869d-38a047276f33/volumes" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.024962 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6"} Jan 28 11:42:09 
crc kubenswrapper[4804]: I0128 11:42:09.025007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2"} Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.026504 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.026583 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.034415 4804 generic.go:334] "Generic (PLEG): container finished" podID="7da1add4-521f-473c-8694-ccecf71fce93" containerID="e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be" exitCode=0 Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.035571 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be"} Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.058892 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podStartSLOduration=3.058860474 podStartE2EDuration="3.058860474s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:09.051348855 +0000 UTC m=+1204.846228839" watchObservedRunningTime="2026-01-28 11:42:09.058860474 +0000 UTC m=+1204.853740458" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.670254 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.672435 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.674534 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.675051 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.689702 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801015 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801066 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801086 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801109 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801208 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801282 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.904987 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905048 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905200 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905852 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906116 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906164 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.909853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.911723 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.919699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922216 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922741 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.045492 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061116 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerStarted","Data":"a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765"} Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061283 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061299 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.062522 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.086445 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" podStartSLOduration=4.08642703 podStartE2EDuration="4.08642703s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:10.079456188 +0000 UTC m=+1205.874336172" watchObservedRunningTime="2026-01-28 11:42:10.08642703 +0000 UTC m=+1205.881307004" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.527939 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.546637 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.068620 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8"} Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.070674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208"} Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.197378 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:42:12 crc kubenswrapper[4804]: I0128 11:42:12.084851 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b"} Jan 28 11:42:12 crc kubenswrapper[4804]: I0128 11:42:12.106210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podStartSLOduration=2.783773209 podStartE2EDuration="6.106184234s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="2026-01-28 11:42:07.307159392 +0000 UTC m=+1203.102039376" lastFinishedPulling="2026-01-28 11:42:10.629570417 +0000 UTC m=+1206.424450401" observedRunningTime="2026-01-28 11:42:12.099373897 +0000 UTC m=+1207.894253891" watchObservedRunningTime="2026-01-28 11:42:12.106184234 +0000 UTC 
m=+1207.901064218" Jan 28 11:42:15 crc kubenswrapper[4804]: I0128 11:42:15.107861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e"} Jan 28 11:42:15 crc kubenswrapper[4804]: I0128 11:42:15.131850 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8f675b957-rm9qp" podStartSLOduration=6.140046342 podStartE2EDuration="9.131831074s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="2026-01-28 11:42:07.551357024 +0000 UTC m=+1203.346237008" lastFinishedPulling="2026-01-28 11:42:10.543141756 +0000 UTC m=+1206.338021740" observedRunningTime="2026-01-28 11:42:15.128801357 +0000 UTC m=+1210.923681341" watchObservedRunningTime="2026-01-28 11:42:15.131831074 +0000 UTC m=+1210.926711068" Jan 28 11:42:15 crc kubenswrapper[4804]: W0128 11:42:15.327812 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3c1e4d_637e_4de6_aa37_7daff5298b30.slice/crio-545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627 WatchSource:0}: Error finding container 545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627: Status 404 returned error can't find the container with id 545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627 Jan 28 11:42:16 crc kubenswrapper[4804]: I0128 11:42:16.117073 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627"} Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.017254 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.083703 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.083985 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" containerID="cri-o://619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" gracePeriod=10 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.132848 4804 generic.go:334] "Generic (PLEG): container finished" podID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerID="dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821" exitCode=0 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.132985 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerDied","Data":"dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821"} Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.139465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"} Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.139695 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" containerID="cri-o://353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" gracePeriod=30 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140015 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140086 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" containerID="cri-o://26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" gracePeriod=30 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140162 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" containerID="cri-o://cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" gracePeriod=30 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140205 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" containerID="cri-o://4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" gracePeriod=30 Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.156562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"} Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.156610 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"} Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.157011 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.157165 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.191822 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.600494285 podStartE2EDuration="57.191795828s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.773526045 +0000 UTC m=+1157.568406029" lastFinishedPulling="2026-01-28 11:42:16.364827598 +0000 UTC m=+1212.159707572" observedRunningTime="2026-01-28 11:42:17.185314982 +0000 UTC m=+1212.980194976" watchObservedRunningTime="2026-01-28 11:42:17.191795828 +0000 UTC m=+1212.986675812" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.228354 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podStartSLOduration=8.228328521 podStartE2EDuration="8.228328521s" podCreationTimestamp="2026-01-28 11:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:17.219210611 +0000 UTC m=+1213.014090595" watchObservedRunningTime="2026-01-28 11:42:17.228328521 +0000 UTC 
m=+1213.023208505" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.711109 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.844377 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.844997 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.845167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.845812 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.846217 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.846320 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.863103 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj" (OuterVolumeSpecName: "kube-api-access-r99bj") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "kube-api-access-r99bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.891006 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.899350 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.900254 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config" (OuterVolumeSpecName: "config") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.905484 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.906179 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948713 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948763 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948774 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948783 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948792 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948802 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173081 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" 
containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173111 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" exitCode=2 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173119 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173273 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175713 4804 generic.go:334] "Generic (PLEG): container finished" podID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175820 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175942 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175971 4804 scope.go:117] "RemoveContainer" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.205069 4804 scope.go:117] "RemoveContainer" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.215175 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.222687 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.228169 4804 scope.go:117] "RemoveContainer" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: E0128 11:42:18.231517 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": container with ID starting with 619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6 not found: ID does not exist" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.231569 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} err="failed to get container status \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": rpc error: code = NotFound desc = could not find container \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": container with ID starting with 619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6 not found: ID does not exist" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.231603 4804 scope.go:117] "RemoveContainer" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: E0128 11:42:18.233849 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": container with ID starting with 988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e not found: ID does not exist" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.233954 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e"} err="failed to get container status 
\"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": rpc error: code = NotFound desc = could not find container \"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": container with ID starting with 988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e not found: ID does not exist" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.538910 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659564 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659644 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659747 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659775 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.660112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.661478 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.665916 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts" (OuterVolumeSpecName: "scripts") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.665965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm" (OuterVolumeSpecName: "kube-api-access-5v9bm") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "kube-api-access-5v9bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.685258 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.699857 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.734006 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.743437 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data" (OuterVolumeSpecName: "config-data") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766213 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766290 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766320 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766340 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867540 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867574 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867596 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867628 
4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.868712 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.869256 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.871802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts" (OuterVolumeSpecName: "scripts") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.872436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5" (OuterVolumeSpecName: "kube-api-access-44xv5") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "kube-api-access-44xv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.896265 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.924042 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" path="/var/lib/kubelet/pods/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4/volumes" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.932546 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.942224 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969384 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969409 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969419 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969436 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969445 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969454 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.980404 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data" (OuterVolumeSpecName: "config-data") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.998852 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.070842 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198043 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" exitCode=0 Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198960 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"fa6e1a12eec8f670dacaf476eeccb44cad0c7ce79723abf8463004426598a522"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.199006 4804 scope.go:117] "RemoveContainer" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.199395 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202227 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202286 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerDied","Data":"cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202320 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.223147 4804 scope.go:117] "RemoveContainer" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.267452 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.283549 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296511 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296897 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296931 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296938 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296951 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296956 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296967 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296972 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296989 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296994 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.297012 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="init" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297018 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="init" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.297037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297044 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297201 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297211 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297226 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297239 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297249 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297259 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.298803 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.304146 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.304317 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.306134 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.345307 4804 scope.go:117] "RemoveContainer" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377415 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377513 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377624 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377671 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.411739 4804 scope.go:117] "RemoveContainer" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 
11:42:19.475122 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.476549 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479608 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479678 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479696 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479728 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.481505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.481643 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484023 4804 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484165 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484248 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p4q8k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484320 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.489061 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.489984 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.501968 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.536061 4804 scope.go:117] "RemoveContainer" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.544025 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": container with ID starting with 26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79 not found: ID does not exist" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544069 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"} err="failed to get container status \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": rpc error: code = NotFound desc = could not find container \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": container with ID starting with 26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544095 4804 scope.go:117] "RemoveContainer" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.544952 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": container with ID starting with cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111 not found: ID does not exist" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544971 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} err="failed to get container status 
\"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": rpc error: code = NotFound desc = could not find container \"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": container with ID starting with cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544986 4804 scope.go:117] "RemoveContainer" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.548999 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": container with ID starting with 4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7 not found: ID does not exist" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.549043 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} err="failed to get container status \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": rpc error: code = NotFound desc = could not find container \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": container with ID starting with 4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.549077 4804 scope.go:117] "RemoveContainer" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.572014 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": container with ID starting with 353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346 not found: ID does not exist" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.572080 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"} err="failed to get container status \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": rpc error: code = NotFound desc = could not find container \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": container with ID starting with 353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.589680 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.590568 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 
11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.590845 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591100 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591226 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591250 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591376 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591424 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.609642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.661058 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.663056 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694715 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694783 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694839 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694996 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.695015 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.696988 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.697560 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.705307 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.706460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.735130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.736542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.736977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.748520 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801010 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801065 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801087 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801122 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801147 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801170 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906318 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906339 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906400 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.907492 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: 
\"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.908431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.908954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.909482 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.909766 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.959865 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.961655 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.963450 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.985334 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:19.993791 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:19.998422 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.081719 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110230 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110450 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110502 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215903 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215992 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216123 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216766 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.219470 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220038 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220299 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220698 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.222092 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.242443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.333348 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.365483 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.713663 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.749193 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.751690 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.944539 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559981d5-7d2e-4624-a425-53ff3158840a" path="/var/lib/kubelet/pods/559981d5-7d2e-4624-a425-53ff3158840a/volumes" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.946079 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.015779 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.016045 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" containerID="cri-o://bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37" gracePeriod=30 Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.016642 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" containerID="cri-o://a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c" gracePeriod=30 Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.050491 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.051803 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.082758 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": EOF" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.123804 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.231107 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"8eea3d4a522a6cf3f69074fc2cae25b852205b216d0f0630ee0a40145a648a1d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.238012 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.246967 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"5bf6ffe97daa495d639b23fe05e4c1895ce6b4f63d483ae138313a43d26164eb"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247116 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247206 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247238 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255376 4804 generic.go:334] "Generic (PLEG): container finished" podID="2b276638-3e05-4295-825f-321552970394" containerID="7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d" exitCode=0 Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255413 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255436 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerStarted","Data":"5ac546ee98d5d28f78181c3225f300b9da32c9a6f7eeb78daa5bbc95aceb3b8d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349278 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349331 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349377 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349433 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349540 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.354958 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.358290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.358439 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.359084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.365176 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.370474 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.382797 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.431603 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.268761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.282758 4804 generic.go:334] "Generic (PLEG): container finished" podID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerID="a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c" exitCode=0 Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.282831 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.308131 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.332169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerStarted","Data":"109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.332644 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.376063 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" podStartSLOduration=3.376043912 podStartE2EDuration="3.376043912s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:22.373218422 +0000 UTC m=+1218.168098406" watchObservedRunningTime="2026-01-28 11:42:22.376043912 +0000 UTC m=+1218.170923896" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.434486 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:22 crc kubenswrapper[4804]: W0128 11:42:22.478011 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095bc753_88c4_456c_a3ae_aa0040a76338.slice/crio-d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e WatchSource:0}: Error finding container d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e: Status 404 returned error can't find the container with id d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.188357 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.341380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.341519 4804 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.343657 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.345931 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.345969 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.363038 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.363022785 podStartE2EDuration="4.363022785s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:23.360368161 +0000 UTC m=+1219.155248155" watchObservedRunningTime="2026-01-28 11:42:23.363022785 +0000 UTC m=+1219.157902769" Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.578308 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.354942 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.355307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.356428 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.356570 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358289 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" containerID="cri-o://5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" gracePeriod=30 Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358565 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" containerID="cri-o://a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" gracePeriod=30 Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.399113 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.566458553 podStartE2EDuration="5.399087611s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="2026-01-28 11:42:20.726672076 +0000 UTC m=+1216.521552060" lastFinishedPulling="2026-01-28 11:42:22.559301134 +0000 UTC m=+1218.354181118" observedRunningTime="2026-01-28 11:42:24.388905307 +0000 UTC m=+1220.183785301" watchObservedRunningTime="2026-01-28 11:42:24.399087611 +0000 UTC m=+1220.193967595" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.414569 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d88fd9b89-w66bx" podStartSLOduration=3.414543542 podStartE2EDuration="3.414543542s" podCreationTimestamp="2026-01-28 11:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:24.408850371 +0000 UTC m=+1220.203730355" watchObservedRunningTime="2026-01-28 11:42:24.414543542 +0000 UTC m=+1220.209423526" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:24.999228 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.244541 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.369857 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" exitCode=0 Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.369926 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" exitCode=143 Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.371551 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372136 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372181 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"5bf6ffe97daa495d639b23fe05e4c1895ce6b4f63d483ae138313a43d26164eb"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372216 4804 scope.go:117] "RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385229 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385359 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385646 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385765 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: 
\"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.386971 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.387252 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs" (OuterVolumeSpecName: "logs") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.395030 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.412334 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.416061 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2" (OuterVolumeSpecName: "kube-api-access-dngx2") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "kube-api-access-dngx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.416086 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts" (OuterVolumeSpecName: "scripts") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.422009 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.466422 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data" (OuterVolumeSpecName: "config-data") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488308 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488350 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488378 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488395 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488407 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488418 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488427 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.594107 4804 scope.go:117] "RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.600647 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.600692 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} err="failed to get container status \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.600721 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.601650 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not exist" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.601716 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} err="failed to get container status \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": rpc error: code = NotFound desc = could not find container \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.601767 4804 scope.go:117] "RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.602287 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} err="failed to get container status \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.602311 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.604778 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} err="failed to get container status \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": rpc error: code = NotFound desc = could not find container \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.713334 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.730897 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.751566 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.752056 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752075 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.752100 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752107 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752290 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752316 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.753277 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.755846 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.755998 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.756540 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.764227 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899362 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899443 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899616 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899665 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " 
pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899762 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899810 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899839 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001443 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001492 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001524 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001551 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001578 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001607 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001695 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.003764 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.003839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.014380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.015755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.015897 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.016375 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.020391 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.025661 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") 
" pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.029145 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.081605 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.824549 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.929548 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" path="/var/lib/kubelet/pods/fc9adb0b-6921-40d7-b50f-abc26763eaf5/volumes" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.997253 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.071626 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.078229 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" containerID="cri-o://8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6" gracePeriod=30 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.078627 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" containerID="cri-o://c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2" gracePeriod=30 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.134312 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.397653 4804 generic.go:334] "Generic (PLEG): container finished" podID="96c46652-7506-4118-a507-a5f2b6668c78" containerID="c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2" exitCode=143 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.397927 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2"} Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.401219 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8"} Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.401452 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.404645 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"6b06f838e59a73b485a69b93f766b0fb460afb06549c4aa004f7bac68fc724cc"} Jan 28 11:42:27 crc 
kubenswrapper[4804]: I0128 11:42:27.423550 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.209883187 podStartE2EDuration="8.423524223s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="2026-01-28 11:42:20.390020171 +0000 UTC m=+1216.184900145" lastFinishedPulling="2026-01-28 11:42:26.603661197 +0000 UTC m=+1222.398541181" observedRunningTime="2026-01-28 11:42:27.421151936 +0000 UTC m=+1223.216031920" watchObservedRunningTime="2026-01-28 11:42:27.423524223 +0000 UTC m=+1223.218404207" Jan 28 11:42:28 crc kubenswrapper[4804]: I0128 11:42:28.415188 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.425528 4804 generic.go:334] "Generic (PLEG): container finished" podID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerID="bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37" exitCode=0 Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.425625 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.428179 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.428359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.453984 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.453962626 podStartE2EDuration="4.453962626s" podCreationTimestamp="2026-01-28 11:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:29.450060233 +0000 UTC m=+1225.244940217" watchObservedRunningTime="2026-01-28 11:42:29.453962626 +0000 UTC m=+1225.248842610" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.084027 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.153782 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.154054 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" containerID="cri-o://a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765" gracePeriod=10 Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.163373 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.256047 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.309978 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316611 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316729 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316759 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316935 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316964 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.317608 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.331579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.334350 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw" (OuterVolumeSpecName: "kube-api-access-m86bw") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "kube-api-access-m86bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.346622 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36822->10.217.0.161:9311: read: connection reset by peer" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.346687 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36824->10.217.0.161:9311: read: connection reset by peer" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.378988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.380580 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config" (OuterVolumeSpecName: "config") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419837 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419872 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419905 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419919 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.420240 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.441439 4804 generic.go:334] "Generic (PLEG): container finished" podID="7da1add4-521f-473c-8694-ccecf71fce93" containerID="a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765" exitCode=0 Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.441549 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765"} Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446481 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b"} Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446523 4804 scope.go:117] "RemoveContainer" containerID="a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446698 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.462946 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463499 4804 generic.go:334] "Generic (PLEG): container finished" podID="96c46652-7506-4118-a507-a5f2b6668c78" containerID="8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6" exitCode=0 Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463684 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" containerID="cri-o://4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e" gracePeriod=30 Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463994 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6"} Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.465851 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" containerID="cri-o://6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088" gracePeriod=30 Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.479115 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.523121 4804 scope.go:117] "RemoveContainer" containerID="bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526111 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526147 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526155 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.689421 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.787749 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.800119 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.831934 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.831992 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832254 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.865150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625" (OuterVolumeSpecName: "kube-api-access-hj625") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "kube-api-access-hj625". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.908760 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.920013 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.934969 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" path="/var/lib/kubelet/pods/1f4e070e-7b0f-4a60-9383-7e1a61380fc6/volumes" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936068 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936088 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936097 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.950458 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.955645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.977033 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config" (OuterVolumeSpecName: "config") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.980635 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040621 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040661 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040673 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141579 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141899 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141964 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.142107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.142140 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.143083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs" (OuterVolumeSpecName: "logs") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.145143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556" (OuterVolumeSpecName: "kube-api-access-th556") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "kube-api-access-th556". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.150053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.168221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.235426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data" (OuterVolumeSpecName: "config-data") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244119 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244160 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244171 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244182 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244196 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474909 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"613d25f46f67af98ce70f3f5abf8d934501d6069e147f6af856f94fa63cd3fb2"} Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474932 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474973 4804 scope.go:117] "RemoveContainer" containerID="a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.487116 4804 generic.go:334] "Generic (PLEG): container finished" podID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerID="6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088" exitCode=0 Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.487159 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088"} Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.491063 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.491308 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46"} Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.498475 4804 scope.go:117] "RemoveContainer" containerID="e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.511949 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.533063 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.539852 4804 scope.go:117] "RemoveContainer" containerID="8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6" Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.544187 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.550321 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.567311 4804 scope.go:117] "RemoveContainer" containerID="c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.504661 4804 generic.go:334] "Generic (PLEG): container finished" podID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerID="4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e" exitCode=0 Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.504991 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e"} Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.647046 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773816 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773900 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773982 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774066 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774093 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774082 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774426 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.781439 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts" (OuterVolumeSpecName: "scripts") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.793585 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l" (OuterVolumeSpecName: "kube-api-access-ckd7l") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "kube-api-access-ckd7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.793690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.823798 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875578 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875606 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875615 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875626 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.891104 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data" (OuterVolumeSpecName: "config-data") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.924843 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da1add4-521f-473c-8694-ccecf71fce93" path="/var/lib/kubelet/pods/7da1add4-521f-473c-8694-ccecf71fce93/volumes" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.925685 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c46652-7506-4118-a507-a5f2b6668c78" path="/var/lib/kubelet/pods/96c46652-7506-4118-a507-a5f2b6668c78/volumes" Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.977640 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.067733 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode213f7b0_f3b8_45f6_b965_ed909114500f.slice/crio-8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode213f7b0_f3b8_45f6_b965_ed909114500f.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.516755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856"} Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.517924 4804 scope.go:117] "RemoveContainer" containerID="6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.517860 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.541922 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.545359 4804 scope.go:117] "RemoveContainer" containerID="4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.554205 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.575661 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.576603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.576808 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.576897 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.576957 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.577035 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="init" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.577804 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="init" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.577869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.577971 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578168 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578219 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578276 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578324 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578382 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578434 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578894 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579002 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579068 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579140 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.580791 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.585638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.590659 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702281 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702307 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702333 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702369 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.803995 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804133 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804223 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804266 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804326 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.808771 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.808852 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.809678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.810604 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.819903 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.919310 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f885d959c-vhjh4"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.330385 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.530074 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"4cc14b4a4b262ffd7dca6ce3a4c78be1958d2621d179512804ce0187bc8fd56e"}
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.726982 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.730507 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735275 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735441 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-l6dg9"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.756952 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.826940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827103 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827259 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.934687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935372 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.936424 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.959654 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" path="/var/lib/kubelet/pods/e213f7b0-f3b8-45f6-b965-ed909114500f/volumes"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.970717 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.978674 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.983614 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient"
Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.103949 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Need to start a new one" pod="openstack/openstackclient" Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.563547 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7"} Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.680112 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.959336 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.960376 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.575699 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c"} Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.585731 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"eaba1c3c-49d4-498e-94b8-9c8cbe8660da","Type":"ContainerStarted","Data":"8f4a06f61311546314d53868ba1af1c45d570329c2ec6e58fe2ccf8f3233f81c"} Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.603618 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.603596762 podStartE2EDuration="3.603596762s" podCreationTimestamp="2026-01-28 11:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:36.595966422 +0000 UTC m=+1232.390846406" watchObservedRunningTime="2026-01-28 11:42:36.603596762 +0000 UTC m=+1232.398476736" Jan 28 11:42:38 crc kubenswrapper[4804]: I0128 11:42:38.445419 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 11:42:38 crc kubenswrapper[4804]: I0128 11:42:38.905398 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.017896 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.018538 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" containerID="cri-o://d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019333 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" containerID="cri-o://60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019391 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" containerID="cri-o://d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" 
gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019431 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" containerID="cri-o://2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.027175 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.365901 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.367348 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370168 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370599 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370939 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377009 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377039 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377082 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377104 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.383992 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"]
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480805 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480862 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481121 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481360 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481422 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481651 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481752 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.488212 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.491465 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.492452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.497861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.505993 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.507374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641029 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" exitCode=0 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641060 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" exitCode=2 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641070 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" exitCode=0 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641089 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.690997 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:44 crc kubenswrapper[4804]: I0128 11:42:44.206581 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 11:42:46 crc kubenswrapper[4804]: I0128 11:42:46.735463 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" exitCode=0 Jan 28 11:42:46 crc kubenswrapper[4804]: I0128 11:42:46.735565 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc"} Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.887592 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918343 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918735 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918801 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918822 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.919141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.919945 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.925728 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts" (OuterVolumeSpecName: "scripts") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.927867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp" (OuterVolumeSpecName: "kube-api-access-88vrp") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "kube-api-access-88vrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.940853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.973052 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020143 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020188 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020202 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020216 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020230 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.047564 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.052405 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data" (OuterVolumeSpecName: "config-data") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.121402 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.121624 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.765667 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.765988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.766001 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"6d2eca1ee21c2e58f6c5ebc2fd659f0e3b36f17ff8d88938be99b51b5573272e"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.767213 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.767243 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.769006 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"eaba1c3c-49d4-498e-94b8-9c8cbe8660da","Type":"ContainerStarted","Data":"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.772925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"8eea3d4a522a6cf3f69074fc2cae25b852205b216d0f0630ee0a40145a648a1d"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.772971 4804 scope.go:117] "RemoveContainer" containerID="60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.773095 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796370 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796651 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" containerID="cri-o://da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998" gracePeriod=30 Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796810 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" containerID="cri-o://eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a" gracePeriod=30 Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.809614 4804 scope.go:117] "RemoveContainer" containerID="d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.810005 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podStartSLOduration=7.809983354 podStartE2EDuration="7.809983354s" podCreationTimestamp="2026-01-28 11:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:48.802101957 +0000 UTC m=+1244.596981941" watchObservedRunningTime="2026-01-28 11:42:48.809983354 +0000 UTC m=+1244.604863338" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.838134 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.855163 4804 scope.go:117] "RemoveContainer" containerID="2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.861687 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884438 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884867 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884902 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884925 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884933 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884963 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884972 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 
11:42:48.884986 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884993 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885205 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885252 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885271 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885284 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.887305 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.892394 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.892520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.896693 4804 scope.go:117] "RemoveContainer" containerID="d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.900637 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.764383106 podStartE2EDuration="14.900619233s" podCreationTimestamp="2026-01-28 11:42:34 +0000 UTC" firstStartedPulling="2026-01-28 11:42:35.676733068 +0000 UTC m=+1231.471613052" lastFinishedPulling="2026-01-28 11:42:47.812969195 +0000 UTC m=+1243.607849179" observedRunningTime="2026-01-28 11:42:48.858331068 +0000 UTC m=+1244.653211052" watchObservedRunningTime="2026-01-28 11:42:48.900619233 +0000 UTC m=+1244.695499217" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.911405 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.949457 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" path="/var/lib/kubelet/pods/bf5a35f4-0777-4b67-978a-ce8ab97000d4/volumes" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050530 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod 
\"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050897 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.051001 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.051034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152604 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152627 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152703 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.153601 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.153690 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.160557 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.161629 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.161788 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.167572 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.175052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.226032 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.785314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998"} Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.785269 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerID="da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998" exitCode=143 Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.805541 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.012361 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.013596 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.028513 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.113089 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.114762 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.125759 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.176429 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.177377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.222975 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.225011 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.226722 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.230758 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.279510 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.279556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.281058 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.281231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.282034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.303337 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.319567 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.321211 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.333405 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.338667 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385300 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385377 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385510 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.387162 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.425689 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.433081 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.442744 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.444224 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.447114 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.479122 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504409 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504840 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504899 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504977 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.505277 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.509033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.526596 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.542482 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.542788 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log" containerID="cri-o://08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af" gracePeriod=30 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.543928 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd" containerID="cri-o://4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa" gracePeriod=30 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646418 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646526 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.648134 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.653384 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " 
pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.668669 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.687766 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.688979 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.699364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.704318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.715364 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.800357 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838316 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838419 4804 generic.go:334] "Generic (PLEG): container finished" podID="268e1424-c22b-4694-a27b-e000fae8fc84" containerID="08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af" exitCode=143 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.851318 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.851389 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"d41d4c7be9d35074e4d66f189d7ceeb0f8e689b845892ee568c44e96679d5f03"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.863191 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.863237 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.868327 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.882473 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.968902 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.969015 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.973510 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.001019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.080759 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.227825 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:51 crc kubenswrapper[4804]: W0128 11:42:51.245267 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a68429_2ef0_45da_8a73_62231d018738.slice/crio-3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3 WatchSource:0}: Error finding container 3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3: Status 404 returned error can't find the container with id 3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.390444 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.455340 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.480151 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.554989 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.563422 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c6795cf88-vn4sv" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd" containerID="cri-o://0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be" gracePeriod=30 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.563558 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c6795cf88-vn4sv" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api" containerID="cri-o://a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5" gracePeriod=30 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.582320 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:51 crc kubenswrapper[4804]: W0128 11:42:51.594910 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b33b00_9642_45dc_8256_5db39ca166f1.slice/crio-94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204 WatchSource:0}: Error finding container 94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204: Status 404 returned error can't find the container with id 94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.697313 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.751818 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.862169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerStarted","Data":"0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"} 
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.867111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerStarted","Data":"975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"}
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.868197 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerStarted","Data":"3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"}
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.873209 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerStarted","Data":"94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"}
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.878119 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerStarted","Data":"97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"}
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.880298 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerStarted","Data":"333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"}
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.980469 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:33600->10.217.0.151:9292: read: connection reset by peer"
Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.980806 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:33616->10.217.0.151:9292: read: connection reset by peer"
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.891863 4804 generic.go:334] "Generic (PLEG): container finished" podID="47a68429-2ef0-45da-8a73-62231d018738" containerID="d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.892020 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerDied","Data":"d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44"}
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.894050 4804 generic.go:334] "Generic (PLEG): container finished" podID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.894099 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"}
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.895081 4804 generic.go:334] "Generic (PLEG): container finished" podID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerID="4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.895121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerDied","Data":"4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6"}
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.904324 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerID="eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.904451 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a"}
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.908798 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerID="00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.908874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerDied","Data":"00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0"}
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.917258 4804 generic.go:334] "Generic (PLEG): container finished" podID="18b33b00-9642-45dc-8256-5db39ca166f1" containerID="75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.922199 4804 generic.go:334] "Generic (PLEG): container finished" podID="99ffbce9-a3f3-4012-861a-fae498510fde" containerID="942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc" exitCode=0
Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.923640 4804 generic.go:334] "Generic (PLEG): container finished" podID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerID="1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb" exitCode=0
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054732 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerDied","Data":"75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044"}
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054782 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerDied","Data":"942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc"}
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerDied","Data":"1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb"}
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.262760 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.326773 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327473 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327517 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327650 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.331141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332091 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs" (OuterVolumeSpecName: "logs") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332401 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332506 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5" (OuterVolumeSpecName: "kube-api-access-9qss5") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "kube-api-access-9qss5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.335316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts" (OuterVolumeSpecName: "scripts") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.372164 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429456 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429511 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429525 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429538 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429551 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429564 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.458691 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data" (OuterVolumeSpecName: "config-data") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.469756 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.476359 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530780 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530805 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530816 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955134 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa"}
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955504 4804 scope.go:117] "RemoveContainer" containerID="eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a"
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955676 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.998339 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"}
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.072962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.103008 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.129863 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:54 crc kubenswrapper[4804]: E0128 11:42:54.130464 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130486 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd"
Jan 28 11:42:54 crc kubenswrapper[4804]: E0128 11:42:54.130515 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130523 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130753 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130777 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.131863 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.137342 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.137424 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.168021 4804 scope.go:117] "RemoveContainer" containerID="da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.175094 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266495 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267068 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267091 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267120 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.368822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369583 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369738 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370002 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370392 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.375859 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.377311 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.379707 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.419333 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.422407 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.439071 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.456377 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.611683 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.678161 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"99ffbce9-a3f3-4012-861a-fae498510fde\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.678460 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"99ffbce9-a3f3-4012-861a-fae498510fde\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.679414 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99ffbce9-a3f3-4012-861a-fae498510fde" (UID: "99ffbce9-a3f3-4012-861a-fae498510fde"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.683774 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5" (OuterVolumeSpecName: "kube-api-access-p5xk5") pod "99ffbce9-a3f3-4012-861a-fae498510fde" (UID: "99ffbce9-a3f3-4012-861a-fae498510fde"). InnerVolumeSpecName "kube-api-access-p5xk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.781004 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.781042 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.801901 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.883444 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"2baa2aa0-600d-4728-bb8c-7fee05022658\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.883812 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"2baa2aa0-600d-4728-bb8c-7fee05022658\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.884572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2baa2aa0-600d-4728-bb8c-7fee05022658" (UID: "2baa2aa0-600d-4728-bb8c-7fee05022658"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.890296 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6" (OuterVolumeSpecName: "kube-api-access-95tm6") pod "2baa2aa0-600d-4728-bb8c-7fee05022658" (UID: "2baa2aa0-600d-4728-bb8c-7fee05022658"). InnerVolumeSpecName "kube-api-access-95tm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.913520 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.920631 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.949785 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" path="/var/lib/kubelet/pods/c6e31fe0-ad05-40cd-9eee-1597a421a009/volumes"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.951804 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.960498 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x5xnt"
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.990914 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"18b33b00-9642-45dc-8256-5db39ca166f1\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.990985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991256 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"47a68429-2ef0-45da-8a73-62231d018738\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991432 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991485 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"18b33b00-9642-45dc-8256-5db39ca166f1\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991548 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"47a68429-2ef0-45da-8a73-62231d018738\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991647 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") "
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991763 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18b33b00-9642-45dc-8256-5db39ca166f1" (UID: "18b33b00-9642-45dc-8256-5db39ca166f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992126 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e2ade0c-9218-4f08-b78f-b6b6ede461f7" (UID: "5e2ade0c-9218-4f08-b78f-b6b6ede461f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992433 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992447 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992458 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992468 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992820 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf79509c-10e0-4ebc-a55d-e46f5497e2fd" (UID: "bf79509c-10e0-4ebc-a55d-e46f5497e2fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.993227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47a68429-2ef0-45da-8a73-62231d018738" (UID: "47a68429-2ef0-45da-8a73-62231d018738"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:54.999094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2" (OuterVolumeSpecName: "kube-api-access-sk5z2") pod "5e2ade0c-9218-4f08-b78f-b6b6ede461f7" (UID: "5e2ade0c-9218-4f08-b78f-b6b6ede461f7"). InnerVolumeSpecName "kube-api-access-sk5z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.002032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg" (OuterVolumeSpecName: "kube-api-access-dbntg") pod "47a68429-2ef0-45da-8a73-62231d018738" (UID: "47a68429-2ef0-45da-8a73-62231d018738"). InnerVolumeSpecName "kube-api-access-dbntg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.005565 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64" (OuterVolumeSpecName: "kube-api-access-lgh64") pod "bf79509c-10e0-4ebc-a55d-e46f5497e2fd" (UID: "bf79509c-10e0-4ebc-a55d-e46f5497e2fd"). InnerVolumeSpecName "kube-api-access-lgh64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.013955 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj" (OuterVolumeSpecName: "kube-api-access-8j5nj") pod "18b33b00-9642-45dc-8256-5db39ca166f1" (UID: "18b33b00-9642-45dc-8256-5db39ca166f1"). InnerVolumeSpecName "kube-api-access-8j5nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.045515 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.062771 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.062966 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerDied","Data":"3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.063002 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090461 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerDied","Data":"333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090508 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090584 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104419 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104466 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104481 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105738 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105770 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105785 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112632 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerDied","Data":"97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112676 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112747 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120826 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerDied","Data":"0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120869 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120969 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152578 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x5xnt"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152912 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerDied","Data":"975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152937 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176224 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerDied","Data":"94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176304 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176378 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.195902 4804 generic.go:334] "Generic (PLEG): container finished" podID="268e1424-c22b-4694-a27b-e000fae8fc84" containerID="4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa" exitCode=0
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.195960 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.222561 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309867 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309916 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310006 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310103 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310179 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310206 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310332 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.314171 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.314575 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs" (OuterVolumeSpecName: "logs") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.319156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.320184 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx" (OuterVolumeSpecName: "kube-api-access-bgqsx") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "kube-api-access-bgqsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.348387 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts" (OuterVolumeSpecName: "scripts") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.356255 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.395456 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415492 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415522 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415531 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415541 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415549 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415559 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415589 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.420291 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data" (OuterVolumeSpecName: "config-data") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.477047 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.494275 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.517593 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.517628 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: W0128 11:42:55.532051 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5198da96_d6b6_4b80_bb93_838dff10730e.slice/crio-58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919 WatchSource:0}: Error finding container 58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919: Status 404 returned error can't find the container with id 58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.918213 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.027827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028187 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028222 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028283 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.034861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh" (OuterVolumeSpecName: "kube-api-access-fv5jh") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "kube-api-access-fv5jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.035627 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.114792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config" (OuterVolumeSpecName: "config") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.126645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130380 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130416 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130430 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130443 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.177866 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.214168 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215684 4804 generic.go:334] "Generic (PLEG): container finished" podID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5" exitCode=0
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215738 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215784 4804 scope.go:117] "RemoveContainer" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.216001 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.228218 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"69238578a45f6424f2874038dcb7535af5f39f1f664e37959ae69aa2b648befa"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.228325 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.232390 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.254351 4804 scope.go:117] "RemoveContainer" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.265173 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.280232 4804 scope.go:117] "RemoveContainer" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.281714 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": container with ID starting with 0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be not found: ID does not exist" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.281750 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"} err="failed to get container status \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": rpc error: code = NotFound desc = could not find container \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": container with ID starting with 0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be not found: ID does not exist"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.281775 4804 scope.go:117] "RemoveContainer" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.282060 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": container with ID starting with a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5 not found: ID does not exist" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.282076 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"} err="failed to get container status \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": rpc error: code = NotFound desc = could not find container \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": container with ID starting with a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5 not found: ID does not exist"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.282088 4804 scope.go:117] "RemoveContainer" containerID="4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.286934 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.325209
4804 scope.go:117] "RemoveContainer" containerID="08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.330875 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.346723 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.358686 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359147 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359162 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359173 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359182 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359207 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359215 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359233 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359240 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359251 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359258 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359270 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359277 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359292 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359299 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359312 4804 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359319 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359333 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359340 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api" Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359359 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359368 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359564 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359579 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359594 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359606 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359620 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359639 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359652 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359664 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359674 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359686 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.360857 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.366753 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.368009 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.373104 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.537732 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538249 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538371 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538530 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538773 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539107 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539164 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539260 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641594 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641774 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641798 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641864 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.642042 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.642808 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.642814 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.648674 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.648827 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.653477 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.667410 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.671672 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.687661 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.710357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.720678 4804 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.812044 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.030062 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" path="/var/lib/kubelet/pods/17438a34-7ac2-4451-b74e-97ebbf9318f3/volumes" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.031097 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" path="/var/lib/kubelet/pods/268e1424-c22b-4694-a27b-e000fae8fc84/volumes" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.289309 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096"} Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321308 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" containerID="cri-o://92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321598 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321653 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321671 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" containerID="cri-o://42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321741 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" containerID="cri-o://286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321828 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" containerID="cri-o://82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.353342 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.976554887 podStartE2EDuration="9.353324556s" podCreationTimestamp="2026-01-28 11:42:48 +0000 UTC" firstStartedPulling="2026-01-28 11:42:49.806692055 +0000 UTC m=+1245.601572039" lastFinishedPulling="2026-01-28 11:42:56.183461724 +0000 UTC m=+1251.978341708" observedRunningTime="2026-01-28 11:42:57.346985167 +0000 UTC m=+1253.141865181" watchObservedRunningTime="2026-01-28 11:42:57.353324556 +0000 UTC m=+1253.148204540" Jan 28 
11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.593563 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.265284 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.369830 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382421 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382498 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382770 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382838 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384051 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384171 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"13f3f152dac9edae9ea4638a3a8d8a972d428663034fabf17665286ff2611f13"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384968 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389440 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389532 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" exitCode=2 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389551 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389562 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389620 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389624 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389703 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389713 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"d41d4c7be9d35074e4d66f189d7ceeb0f8e689b845892ee568c44e96679d5f03"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389729 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.401093 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.403713 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts" (OuterVolumeSpecName: "scripts") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.404983 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89" (OuterVolumeSpecName: "kube-api-access-dhx89") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "kube-api-access-dhx89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.444905 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.444871826 podStartE2EDuration="4.444871826s" podCreationTimestamp="2026-01-28 11:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:58.437490764 +0000 UTC m=+1254.232370748" watchObservedRunningTime="2026-01-28 11:42:58.444871826 +0000 UTC m=+1254.239751800" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487619 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487648 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487657 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487665 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.503121 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.527966 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.549194 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data" (OuterVolumeSpecName: "config-data") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589688 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589729 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589742 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.621017 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.640248 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.663510 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.693758 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.694302 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694334 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694354 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.694965 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694994 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status 
\"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695011 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.695237 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695260 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695273 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.695537 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695578 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695607 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695853 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: 
ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695869 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696118 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696141 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696292 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696314 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696487 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696504 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696765 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696786 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697072 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status 
\"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697091 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697302 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697320 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697740 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697762 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698421 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698441 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698662 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698684 4804 scope.go:117] "RemoveContainer" 
containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698839 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698859 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.699017 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.729876 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.738563 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.746932 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747340 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747359 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747380 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747386 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747396 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747402 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747413 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747418 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 
11:42:58.747576 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747590 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747597 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747609 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.749088 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.752478 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.752825 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.772762 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895346 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895719 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895755 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895984 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.896152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l79m\" (UniqueName: 
\"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.896235 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.924629 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f84576-8347-4b5a-b084-17f248dba057" path="/var/lib/kubelet/pods/20f84576-8347-4b5a-b084-17f248dba057/volumes" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998179 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998270 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998356 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998400 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998777 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: 
\"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.999036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.002217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.003937 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.004085 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.018601 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.021292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.067428 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.413275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.435137 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.435114034 podStartE2EDuration="3.435114034s" podCreationTimestamp="2026-01-28 11:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:59.431029066 +0000 UTC m=+1255.225909060" watchObservedRunningTime="2026-01-28 11:42:59.435114034 +0000 UTC m=+1255.229994018" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.564406 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:59 crc kubenswrapper[4804]: W0128 11:42:59.565531 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e7f566_1434_46e9_b3d3_fffbdb60a6bf.slice/crio-4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab WatchSource:0}: Error finding container 4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab: Status 404 returned error can't find the container with id 4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.866310 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.425770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d"} Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.426609 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab"} Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.696196 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.697783 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.699596 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.700779 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.701189 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kcpcp" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.712270 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741081 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741411 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741528 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741650 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843563 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843725 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: 
\"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.849614 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.850433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.855599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.868135 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:01 crc kubenswrapper[4804]: I0128 11:43:01.027728 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:01 crc kubenswrapper[4804]: I0128 11:43:01.503956 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:01 crc kubenswrapper[4804]: W0128 11:43:01.505991 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod359ecb47_f044_4273_8589_c0ceedb367b5.slice/crio-daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76 WatchSource:0}: Error finding container daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76: Status 404 returned error can't find the container with id daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76 Jan 28 11:43:02 crc kubenswrapper[4804]: I0128 11:43:02.445688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2"} Jan 28 11:43:02 crc kubenswrapper[4804]: I0128 11:43:02.448545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerStarted","Data":"daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76"} Jan 28 11:43:03 crc kubenswrapper[4804]: I0128 11:43:03.466476 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa"} Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.458195 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.460550 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.484690 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" containerID="cri-o://19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.484997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0"} Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485042 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485381 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" containerID="cri-o://bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485442 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" 
containerID="cri-o://5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485481 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" containerID="cri-o://d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.512659 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.035612547 podStartE2EDuration="6.512636588s" podCreationTimestamp="2026-01-28 11:42:58 +0000 UTC" firstStartedPulling="2026-01-28 11:42:59.567353205 +0000 UTC m=+1255.362233189" lastFinishedPulling="2026-01-28 11:43:04.044377246 +0000 UTC m=+1259.839257230" observedRunningTime="2026-01-28 11:43:04.505426813 +0000 UTC m=+1260.300306797" watchObservedRunningTime="2026-01-28 11:43:04.512636588 +0000 UTC m=+1260.307516572" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.517929 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.525600 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504271 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" exitCode=0 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504567 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" exitCode=2 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504579 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" exitCode=0 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504345 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504683 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.505895 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.505928 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.813697 4804 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.813755 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.844691 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.866941 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.683807 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.684969 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.697345 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.697386 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.730743 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" exitCode=0 Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.730833 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d"} Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.731130 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.731139 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.151538 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.183271 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.455120 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522529 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522584 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522772 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522854 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522897 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522919 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.524150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.525331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.559870 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts" (OuterVolumeSpecName: "scripts") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.570148 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m" (OuterVolumeSpecName: "kube-api-access-9l79m") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "kube-api-access-9l79m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628431 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628501 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628513 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628522 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.657112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.733484 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.744931 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.744948 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab"} Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.745028 4804 scope.go:117] "RemoveContainer" containerID="bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.753825 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.770225 4804 scope.go:117] "RemoveContainer" containerID="5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.773041 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data" (OuterVolumeSpecName: "config-data") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.792093 4804 scope.go:117] "RemoveContainer" containerID="d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.835074 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.835432 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.890201 4804 scope.go:117] "RemoveContainer" containerID="19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.082599 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.095446 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118140 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118605 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118627 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118666 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 11:43:11 crc 
kubenswrapper[4804]: I0128 11:43:11.118675 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118698 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118706 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118717 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118724 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118967 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118987 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118999 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.119019 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.120698 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.132558 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.132779 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.148909 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242639 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242669 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242911 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242952 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.243006 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.243245 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345384 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345414 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345597 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.346026 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.346204 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.350552 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.351083 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.351558 4804 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.352497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.367683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.451160 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.768265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerStarted","Data":"4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14"} Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.794735 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qbth2" podStartSLOduration=2.893949192 podStartE2EDuration="11.794715745s" podCreationTimestamp="2026-01-28 11:43:00 +0000 UTC" firstStartedPulling="2026-01-28 11:43:01.508593662 +0000 UTC m=+1257.303473646" lastFinishedPulling="2026-01-28 11:43:10.409360215 +0000 UTC m=+1266.204240199" observedRunningTime="2026-01-28 11:43:11.783968429 +0000 UTC m=+1267.578848433" watchObservedRunningTime="2026-01-28 11:43:11.794715745 +0000 UTC m=+1267.589595729" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.979755 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:12 crc kubenswrapper[4804]: I0128 11:43:12.788568 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"5dff59600756e03acb4484ec19d69924e551231377219d020efce0a7d85e6522"} Jan 28 11:43:12 crc kubenswrapper[4804]: I0128 11:43:12.928454 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" path="/var/lib/kubelet/pods/82e7f566-1434-46e9-b3d3-fffbdb60a6bf/volumes" Jan 28 11:43:13 crc kubenswrapper[4804]: I0128 11:43:13.789070 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:13 crc kubenswrapper[4804]: I0128 11:43:13.796644 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15"} Jan 28 11:43:14 crc kubenswrapper[4804]: I0128 11:43:14.806712 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15"} Jan 28 11:43:15 crc kubenswrapper[4804]: I0128 11:43:15.817020 
4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7"} Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.839704 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0"} Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840113 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent" containerID="cri-o://d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840417 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840660 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd" containerID="cri-o://333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840705 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core" containerID="cri-o://11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840750 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent" containerID="cri-o://ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15" gracePeriod=30 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.850385 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0" exitCode=0 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.851832 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7" exitCode=2 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.851998 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15" exitCode=0 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.850564 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0"} Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.852253 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7"} Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.852344 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15"} Jan 28 11:43:23 crc kubenswrapper[4804]: I0128 11:43:23.904120 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15" exitCode=0 Jan 28 11:43:23 crc kubenswrapper[4804]: I0128 11:43:23.904184 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15"} Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.164265 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.221452 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.221640 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222126 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222152 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222190 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222215 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222457 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222951 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.223784 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.224032 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.228460 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4" (OuterVolumeSpecName: "kube-api-access-q8kx4") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "kube-api-access-q8kx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.230078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts" (OuterVolumeSpecName: "scripts") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.258956 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.306744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.316299 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data" (OuterVolumeSpecName: "config-data") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325145 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325168 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325214 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325226 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325234 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.922376 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.924983 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"5dff59600756e03acb4484ec19d69924e551231377219d020efce0a7d85e6522"} Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.925028 4804 scope.go:117] "RemoveContainer" containerID="333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.953789 4804 scope.go:117] "RemoveContainer" containerID="11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7" Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.974760 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.984419 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.018416 4804 scope.go:117] "RemoveContainer" containerID="ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068057 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.068539 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core" Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.068583 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068593 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.069436 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069463 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd" Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.069476 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069486 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070070 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070089 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" 
containerName="ceilometer-central-agent" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070101 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.073129 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.079003 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.079194 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.080639 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.092053 4804 scope.go:117] "RemoveContainer" containerID="d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147534 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147585 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147618 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147634 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250240 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250349 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250388 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250466 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250565 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250609 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.251065 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.251178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.255843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.259508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.259820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.260096 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.268827 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.399943 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.860408 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.926625 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"c8765a95fb8f276f5341ac43164dd55a291ff8252543d039befa66bf61350f2c"} Jan 28 11:43:26 crc kubenswrapper[4804]: I0128 11:43:26.925735 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" path="/var/lib/kubelet/pods/efacd8f7-ea6c-47c4-a463-44d8138b8902/volumes" Jan 28 11:43:26 crc kubenswrapper[4804]: I0128 11:43:26.935664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"} Jan 28 11:43:27 crc kubenswrapper[4804]: I0128 11:43:27.945999 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"} Jan 28 11:43:28 crc kubenswrapper[4804]: I0128 11:43:28.958823 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"} Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.551055 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.979187 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"} Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.979748 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.002927 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.997829147 podStartE2EDuration="7.002909229s" podCreationTimestamp="2026-01-28 11:43:24 +0000 UTC" firstStartedPulling="2026-01-28 11:43:25.871685613 +0000 UTC m=+1281.666565597" lastFinishedPulling="2026-01-28 11:43:29.876765695 +0000 UTC m=+1285.671645679" observedRunningTime="2026-01-28 11:43:30.99782023 +0000 UTC m=+1286.792700234" watchObservedRunningTime="2026-01-28 11:43:31.002909229 +0000 UTC m=+1286.797789213" Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987482 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" containerID="cri-o://f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" gracePeriod=30 Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987535 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" containerID="cri-o://2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" gracePeriod=30 Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987556 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" containerID="cri-o://ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" gracePeriod=30 Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987535 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" containerID="cri-o://9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" gracePeriod=30 Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006616 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" exitCode=0 Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006925 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" exitCode=2 Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006935 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" exitCode=0 Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006670 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"} Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006972 4804 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"} Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"} Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.829068 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937523 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937594 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937615 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937792 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.938022 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.938593 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.939019 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.952779 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts" (OuterVolumeSpecName: "scripts") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.956747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8" (OuterVolumeSpecName: "kube-api-access-p8vc8") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "kube-api-access-p8vc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.965092 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.018177 4804 generic.go:334] "Generic (PLEG): container finished" podID="359ecb47-f044-4273-8589-c0ceedb367b5" containerID="4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14" exitCode=0 Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.018250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerDied","Data":"4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14"} Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029801 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" exitCode=0 Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029850 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"} Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"c8765a95fb8f276f5341ac43164dd55a291ff8252543d039befa66bf61350f2c"} Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029925 4804 scope.go:117] "RemoveContainer" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.030094 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039548 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data" (OuterVolumeSpecName: "config-data") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039946 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039978 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039992 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.040005 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.040016 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.044945 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.059106 4804 scope.go:117] "RemoveContainer" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.079285 4804 scope.go:117] "RemoveContainer" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.097072 4804 scope.go:117] "RemoveContainer" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.116516 4804 scope.go:117] "RemoveContainer" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.117035 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": container with ID starting with ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4 not found: ID does not exist" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117134 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"} err="failed to get container status \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": rpc error: code = NotFound desc = could not find container \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": container with ID starting with 
ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4 not found: ID does not exist" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117221 4804 scope.go:117] "RemoveContainer" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.117615 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": container with ID starting with 9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916 not found: ID does not exist" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117652 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"} err="failed to get container status \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": rpc error: code = NotFound desc = could not find container \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": container with ID starting with 9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916 not found: ID does not exist" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117677 4804 scope.go:117] "RemoveContainer" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.118111 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": container with ID starting with 2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c not found: ID does not exist" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118200 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"} err="failed to get container status \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": rpc error: code = NotFound desc = could not find container \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": container with ID starting with 2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c not found: ID does not exist" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118267 4804 scope.go:117] "RemoveContainer" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.118597 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": container with ID starting with f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60 not found: ID does not exist" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118648 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"} err="failed to get container status \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": rpc 
error: code = NotFound desc = could not find container \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": container with ID starting with f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60 not found: ID does not exist" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.141367 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.364368 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.374344 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.387991 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388523 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388543 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388561 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388571 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388591 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388598 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388620 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388627 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388822 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388847 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388857 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388869 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.390855 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.393760 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.398724 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.426373 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550161 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550196 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550296 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550364 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651868 
4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651916 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651947 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652043 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.655876 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.656000 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.656675 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.659033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.671954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.713730 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.928776 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" path="/var/lib/kubelet/pods/3af7b9f3-ab28-4971-9cde-112e8127e7ed/volumes" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.137182 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:35 crc kubenswrapper[4804]: W0128 11:43:35.138024 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3580297_d401_446c_818f_fbb89e50c757.slice/crio-14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9 WatchSource:0}: Error finding container 14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9: Status 404 returned error can't find the container with id 14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9 Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.286271 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363048 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363108 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363291 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.368717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np" (OuterVolumeSpecName: "kube-api-access-nv5np") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "kube-api-access-nv5np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.369101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts" (OuterVolumeSpecName: "scripts") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.389143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.389834 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data" (OuterVolumeSpecName: "config-data") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465912 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465940 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465953 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465963 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.050768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerDied","Data":"daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.051170 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.050792 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.056164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.056203 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190186 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:36 crc kubenswrapper[4804]: E0128 11:43:36.190552 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190727 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.191320 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.193295 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kcpcp" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.193328 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.207224 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279205 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279286 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380834 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.389444 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.389658 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.408291 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.505289 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.828402 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:37 crc kubenswrapper[4804]: I0128 11:43:37.068631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerStarted","Data":"c41ec5eb61e29312ebbde6dd9b201b0e68fdaaa8fb1724740ba107ac19157740"} Jan 28 11:43:37 crc kubenswrapper[4804]: I0128 11:43:37.071166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.086582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.088825 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerStarted","Data":"fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.114690 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.114672563 podStartE2EDuration="2.114672563s" podCreationTimestamp="2026-01-28 11:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:38.105470634 +0000 UTC m=+1293.900350618" watchObservedRunningTime="2026-01-28 11:43:38.114672563 +0000 UTC m=+1293.909552547" Jan 28 11:43:39 crc kubenswrapper[4804]: I0128 11:43:39.096189 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.108993 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f"} Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.109321 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.128274 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.116048919 podStartE2EDuration="6.128261575s" podCreationTimestamp="2026-01-28 11:43:34 +0000 UTC" firstStartedPulling="2026-01-28 11:43:35.140686647 
+0000 UTC m=+1290.935566631" lastFinishedPulling="2026-01-28 11:43:39.152899293 +0000 UTC m=+1294.947779287" observedRunningTime="2026-01-28 11:43:40.127243863 +0000 UTC m=+1295.922123847" watchObservedRunningTime="2026-01-28 11:43:40.128261575 +0000 UTC m=+1295.923141559" Jan 28 11:43:42 crc kubenswrapper[4804]: I0128 11:43:42.583409 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:43:42 crc kubenswrapper[4804]: I0128 11:43:42.583823 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:43:46 crc kubenswrapper[4804]: I0128 11:43:46.529692 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.008466 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.010244 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.013162 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.013190 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.021609 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095007 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095357 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") 
pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197416 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197478 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.207122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.211262 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.232722 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.236842 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.252086 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.253611 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.256347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.275330 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.281692 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.284904 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.298906 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.298971 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303231 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303256 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303329 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303448 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.304165 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.323157 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.330980 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.354717 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.356864 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.359305 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.383270 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406120 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406244 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406321 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406394 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406458 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406492 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406520 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406571 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.407102 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.412797 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.413182 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.424036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.428638 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.430085 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.448950 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.450226 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.454209 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.457269 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.462739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.475004 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.501972 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.508934 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509113 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509151 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509175 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52j4\" (UniqueName: 
\"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.512150 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.514208 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.520385 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.524289 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.528789 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.541668 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.546144 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.560005 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.610853 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612303 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612485 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612567 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612737 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612787 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612832 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" 
(UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.619017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.631703 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.633696 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.714950 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715049 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715105 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715430 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715488 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 
11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.716290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.716442 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.717480 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.718294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.720692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.737664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.872774 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.889303 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.037098 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.138551 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.139261 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd67f7045_5136_4adb_af27_14ff32c4c2ea.slice/crio-7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52 WatchSource:0}: Error finding container 7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52: Status 404 returned error can't find the container with id 7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.203965 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerStarted","Data":"9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc"} Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.207554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52"} Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.222340 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.240036 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02dc60a_4990_4b17_8ebd_7b0b58ac8d90.slice/crio-89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0 WatchSource:0}: Error finding container 89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0: Status 404 returned error can't find the container with id 89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.325063 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.326531 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.330816 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4afa58a1_e3ce_42e1_a0d7_bf0c57459ed2.slice/crio-523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383 WatchSource:0}: Error finding container 523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383: Status 404 returned error can't find the container with id 523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.331597 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.331750 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.348514 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.362354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429434 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429478 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.491301 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531468 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc 
kubenswrapper[4804]: I0128 11:43:48.531561 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531648 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.540193 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.540683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.545517 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.550665 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.598513 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod913fe193_1d5f_4561_9618_fde749a25a1d.slice/crio-2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4 WatchSource:0}: Error finding container 2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4: Status 404 returned error can't find the container with id 2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.599495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.698546 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.202425 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.219695 4804 generic.go:334] "Generic (PLEG): container finished" podID="913fe193-1d5f-4561-9618-fde749a25a1d" containerID="5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc" exitCode=0 Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.219861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.220015 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerStarted","Data":"2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.223433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerStarted","Data":"30cd19d729fe0a8f365f4576d67a9396141f36b3555091744e62104b74b1d641"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.229671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerStarted","Data":"523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.232000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerStarted","Data":"a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.234265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.266919 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-blnpq" podStartSLOduration=3.266897485 podStartE2EDuration="3.266897485s" podCreationTimestamp="2026-01-28 11:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:49.255792068 +0000 UTC m=+1305.050672062" watchObservedRunningTime="2026-01-28 11:43:49.266897485 +0000 UTC m=+1305.061777469" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.246864 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerStarted","Data":"a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.247666 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.252023 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerStarted","Data":"14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.252055 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerStarted","Data":"4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.278210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-9f892" podStartSLOduration=3.278191113 podStartE2EDuration="3.278191113s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:50.27268131 +0000 UTC m=+1306.067561294" watchObservedRunningTime="2026-01-28 11:43:50.278191113 +0000 UTC m=+1306.073071097" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.295105 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" podStartSLOduration=2.295084872 podStartE2EDuration="2.295084872s" podCreationTimestamp="2026-01-28 11:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:50.290182338 +0000 UTC m=+1306.085062322" watchObservedRunningTime="2026-01-28 11:43:50.295084872 +0000 UTC m=+1306.089964856" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.846540 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.895859 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.281113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerStarted","Data":"a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.281278 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" gracePeriod=30 Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.282841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerStarted","Data":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286022 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" containerID="cri-o://0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" gracePeriod=30 Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286232 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286279 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286343 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" containerID="cri-o://ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" gracePeriod=30 Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.292054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.292112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.319522 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.622947546 podStartE2EDuration="5.319495113s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.493709395 +0000 UTC m=+1304.288589379" lastFinishedPulling="2026-01-28 11:43:51.190256962 +0000 UTC m=+1306.985136946" observedRunningTime="2026-01-28 11:43:52.305759223 +0000 UTC m=+1308.100639197" watchObservedRunningTime="2026-01-28 11:43:52.319495113 +0000 UTC m=+1308.114375207" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.333241 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.482159506 podStartE2EDuration="5.333217122s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.333527449 +0000 UTC m=+1304.128407473" lastFinishedPulling="2026-01-28 11:43:51.184585105 +0000 UTC m=+1306.979465089" observedRunningTime="2026-01-28 11:43:52.325591134 +0000 UTC m=+1308.120471118" watchObservedRunningTime="2026-01-28 11:43:52.333217122 +0000 UTC m=+1308.128097096" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.354615 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.425206692 podStartE2EDuration="5.354579231s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.247640759 +0000 UTC m=+1304.042520743" lastFinishedPulling="2026-01-28 11:43:51.177013298 +0000 UTC m=+1306.971893282" observedRunningTime="2026-01-28 11:43:52.346787857 +0000 UTC m=+1308.141667851" watchObservedRunningTime="2026-01-28 11:43:52.354579231 +0000 UTC m=+1308.149459215" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.373572 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.377630253 podStartE2EDuration="5.373551246s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" 
firstStartedPulling="2026-01-28 11:43:48.189051644 +0000 UTC m=+1303.983931618" lastFinishedPulling="2026-01-28 11:43:51.184972627 +0000 UTC m=+1306.979852611" observedRunningTime="2026-01-28 11:43:52.368713434 +0000 UTC m=+1308.163593428" watchObservedRunningTime="2026-01-28 11:43:52.373551246 +0000 UTC m=+1308.168431230" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.546946 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.547242 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.561248 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.873250 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.929784 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.956093 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957206 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957447 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957722 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs" (OuterVolumeSpecName: "logs") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.958333 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.962402 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d" (OuterVolumeSpecName: "kube-api-access-q7s4d") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "kube-api-access-q7s4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.985583 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data" (OuterVolumeSpecName: "config-data") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.002230 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060402 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060432 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060463 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302292 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302313 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302379 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302251 4804 generic.go:334] "Generic (PLEG): container finished" podID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" exitCode=0 Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302423 4804 generic.go:334] "Generic (PLEG): container finished" podID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" exitCode=143 Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.334383 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.349277 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.363834 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381112 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.381626 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381669 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} err="failed to get container status \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381703 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 
11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382337 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.382761 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382780 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.382800 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382807 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.383035 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.383051 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.387525 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.387567 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} err="failed to get container status \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.387594 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.389444 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} err="failed to get container status \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.389473 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.392611 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} err="failed to get container status \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.394281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.395780 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.401838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.402019 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.569555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570331 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570389 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570469 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672751 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672808 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672856 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672915 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.673028 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.673475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.677940 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.678505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.685248 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.688799 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.723129 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.172271 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:54 crc kubenswrapper[4804]: W0128 11:43:54.176819 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb27bc011_ed63_4b36_ae46_bba181d0989b.slice/crio-7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b WatchSource:0}: Error finding container 7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b: Status 404 returned error can't find the container with id 7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.322161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b"} Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.940375 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" path="/var/lib/kubelet/pods/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90/volumes" Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.334997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.335345 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.368840 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.368822098 podStartE2EDuration="2.368822098s" podCreationTimestamp="2026-01-28 11:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:55.3679505 +0000 UTC m=+1311.162830484" watchObservedRunningTime="2026-01-28 11:43:55.368822098 +0000 UTC m=+1311.163702102" Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.343923 4804 generic.go:334] "Generic (PLEG): container finished" podID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerID="14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685" exitCode=0 Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.343997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerDied","Data":"14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685"} Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.345361 4804 generic.go:334] "Generic (PLEG): container finished" podID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerID="a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268" exitCode=0 Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.345433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" 
event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerDied","Data":"a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268"} Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.529274 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.529676 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.563244 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.595489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.800117 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.805179 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.886966 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887102 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887153 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887362 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.891049 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.894330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4" (OuterVolumeSpecName: "kube-api-access-q4qw4") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "kube-api-access-q4qw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.895100 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd" (OuterVolumeSpecName: "kube-api-access-7t5jd") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "kube-api-access-7t5jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.895986 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts" (OuterVolumeSpecName: "scripts") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.897028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts" (OuterVolumeSpecName: "scripts") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.943085 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data" (OuterVolumeSpecName: "config-data") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.986157 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996012 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996062 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996076 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996097 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996109 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996122 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.997183 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.997585 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" containerID="cri-o://109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" gracePeriod=10 Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.068696 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.075262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data" (OuterVolumeSpecName: "config-data") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.100512 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.100553 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373092 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerDied","Data":"4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5"} Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373368 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373441 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380205 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerDied","Data":"9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc"} Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380251 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.386194 4804 generic.go:334] "Generic (PLEG): container finished" podID="2b276638-3e05-4295-825f-321552970394" containerID="109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" exitCode=0 Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.386701 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e"} Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.476382 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:43:58 crc kubenswrapper[4804]: E0128 11:43:58.481620 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.481662 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage" Jan 28 11:43:58 crc kubenswrapper[4804]: E0128 11:43:58.481742 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerName="nova-cell1-conductor-db-sync" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.481751 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" 
containerName="nova-cell1-conductor-db-sync" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482075 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482108 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerName="nova-cell1-conductor-db-sync" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482976 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.488998 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.490794 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.534523 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.536909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.536953 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.537034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.565071 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.617141 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.617139 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638109 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638338 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638454 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638491 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638643 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638687 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.645106 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm" (OuterVolumeSpecName: "kube-api-access-8z7dm") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "kube-api-access-8z7dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.646754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.656793 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.659856 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.721243 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.725111 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.726319 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.732499 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.732751 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" containerID="cri-o://e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" gracePeriod=30 Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.733247 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" containerID="cri-o://40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" gracePeriod=30 Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.741236 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.741267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.745650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.746168 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config" (OuterVolumeSpecName: "config") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.749520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.762362 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.764452 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.816643 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842928 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842958 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842969 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842979 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.042132 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:59 crc kubenswrapper[4804]: W0128 11:43:59.298639 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e88e9db_b96d_4009_a4e6_ccbb5be53f85.slice/crio-7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea WatchSource:0}: Error finding container 7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea: Status 404 returned error can't find the container with id 7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.299128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.399651 4804 generic.go:334] "Generic (PLEG): container finished" podID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" exitCode=143 Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.399727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415608 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"5ac546ee98d5d28f78181c3225f300b9da32c9a6f7eeb78daa5bbc95aceb3b8d"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415666 4804 scope.go:117] "RemoveContainer" containerID="109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415697 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.417440 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerStarted","Data":"7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.447006 4804 scope.go:117] "RemoveContainer" containerID="7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.448280 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.457278 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerStarted","Data":"87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955"} Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431330 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" containerID="cri-o://578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431367 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" containerID="cri-o://0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.432025 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" containerID="cri-o://caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.470816 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.470758777 podStartE2EDuration="2.470758777s" podCreationTimestamp="2026-01-28 11:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:00.46256654 +0000 UTC m=+1316.257446544" watchObservedRunningTime="2026-01-28 11:44:00.470758777 +0000 UTC m=+1316.265638761" Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.927628 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b276638-3e05-4295-825f-321552970394" path="/var/lib/kubelet/pods/2b276638-3e05-4295-825f-321552970394/volumes" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.020480 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095550 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095825 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095872 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.096012 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.096072 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.098049 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs" (OuterVolumeSpecName: "logs") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.112893 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk" (OuterVolumeSpecName: "kube-api-access-bcwjk") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "kube-api-access-bcwjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.125451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data" (OuterVolumeSpecName: "config-data") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.132085 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.149930 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198663 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198693 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198702 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198712 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198720 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441017 4804 generic.go:334] "Generic (PLEG): container finished" podID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" exitCode=0 Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441059 4804 generic.go:334] "Generic (PLEG): container finished" podID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" exitCode=143 Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441082 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441122 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441182 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441218 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.442781 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.467821 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.472950 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.483900 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498193 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498732 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b276638-3e05-4295-825f-321552970394" containerName="init" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498751 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b276638-3e05-4295-825f-321552970394" containerName="init" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498767 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498774 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498791 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498799 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498812 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498819 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: 
I0128 11:44:01.499045 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.499063 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.499081 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.500265 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.504625 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.504836 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.506909 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.513085 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.531128 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.531207 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} err="failed to get container status \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": rpc error: code = NotFound desc = could not find container \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.531258 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.531996 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532037 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} err="failed to get container status \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": rpc error: code = NotFound desc = could not find container 
\"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532064 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532345 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} err="failed to get container status \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": rpc error: code = NotFound desc = could not find container \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532396 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532697 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} err="failed to get container status \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": rpc error: code = NotFound desc = could not find container \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607675 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607778 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607806 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607907 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710020 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710070 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710111 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710774 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.715003 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.715388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.718369 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.726417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcd72\" (UniqueName: 
\"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.849770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.289470 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:02 crc kubenswrapper[4804]: W0128 11:44:02.294602 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60dd1bc0_1015_4f2e_8fe0_4e33e2fe36d3.slice/crio-59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312 WatchSource:0}: Error finding container 59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312: Status 404 returned error can't find the container with id 59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312 Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.451429 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312"} Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.567297 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.569357 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.570869 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.570968 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.924991 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" path="/var/lib/kubelet/pods/b27bc011-ed63-4b36-ae46-bba181d0989b/volumes" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.308405 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452263 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452425 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452490 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.456667 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4" (OuterVolumeSpecName: "kube-api-access-h52j4") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "kube-api-access-h52j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468019 4804 generic.go:334] "Generic (PLEG): container finished" podID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" exitCode=0 Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468084 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerDied","Data":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"} Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerDied","Data":"523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383"} Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468128 4804 scope.go:117] "RemoveContainer" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468223 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.473333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"} Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.473365 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"} Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.485060 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.517052 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data" (OuterVolumeSpecName: "config-data") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.556263 4804 scope.go:117] "RemoveContainer" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" Jan 28 11:44:03 crc kubenswrapper[4804]: E0128 11:44:03.556813 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": container with ID starting with caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d not found: ID does not exist" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.556872 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"} err="failed to get container status \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": rpc error: code = NotFound desc = could not find container \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": container with ID starting with caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d not found: ID does not exist" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557268 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557290 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557317 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") 
on node \"crc\" DevicePath \"\"" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.792460 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.79244191 podStartE2EDuration="2.79244191s" podCreationTimestamp="2026-01-28 11:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:03.510239504 +0000 UTC m=+1319.305119488" watchObservedRunningTime="2026-01-28 11:44:03.79244191 +0000 UTC m=+1319.587321894" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.797930 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.809127 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.821987 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:03 crc kubenswrapper[4804]: E0128 11:44:03.822495 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.822519 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.822784 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.823634 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.825772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.835274 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964416 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964495 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964517 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067501 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067809 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.073187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.075017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.087213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphbz\" (UniqueName: 
\"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.143035 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.451480 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488379 4804 generic.go:334] "Generic (PLEG): container finished" podID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" exitCode=0 Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488455 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488470 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"} Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488506 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52"} Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488528 4804 scope.go:117] "RemoveContainer" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.513551 4804 scope.go:117] "RemoveContainer" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.532576 4804 scope.go:117] "RemoveContainer" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.533159 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": container with ID starting with 40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0 not found: ID does not exist" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533200 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"} err="failed to get container status \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": rpc error: code = NotFound desc = could not find container \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": container with ID starting with 40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0 not found: ID does not exist" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533226 4804 scope.go:117] "RemoveContainer" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.533617 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": container with ID starting with e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587 not found: ID does not exist" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533659 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} err="failed to get container status \"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": rpc error: code = NotFound desc = could not find container \"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": container with ID starting with e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587 not found: ID does not exist" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577293 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577390 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.578178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs" (OuterVolumeSpecName: "logs") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.581622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv" (OuterVolumeSpecName: "kube-api-access-56krv") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "kube-api-access-56krv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.603859 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data" (OuterVolumeSpecName: "config-data") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.603964 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.651711 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:04 crc kubenswrapper[4804]: W0128 11:44:04.655268 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ee8dc6_b4c2_46ef_a2a5_3ba27ff2f711.slice/crio-f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf WatchSource:0}: Error finding container f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf: Status 404 returned error can't find the container with id f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680628 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680656 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680669 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680679 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.718354 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.826130 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.841785 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.850073 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.850487 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.850511 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.851197 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.851281 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.854951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.854984 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.858778 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.862845 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.863313 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.927379 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" path="/var/lib/kubelet/pods/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2/volumes" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.928092 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" path="/var/lib/kubelet/pods/d67f7045-5136-4adb-af27-14ff32c4c2ea/volumes" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.993901 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994106 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994206 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994257 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.095548 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.095954 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: 
\"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096294 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.098159 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.101443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.109782 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.113364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.205120 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.502346 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerStarted","Data":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"} Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.502685 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerStarted","Data":"f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf"} Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.523365 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.523348081 podStartE2EDuration="2.523348081s" podCreationTimestamp="2026-01-28 11:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:05.517788687 +0000 UTC m=+1321.312668671" watchObservedRunningTime="2026-01-28 11:44:05.523348081 +0000 UTC m=+1321.318228065" Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.658649 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:05 crc kubenswrapper[4804]: W0128 11:44:05.659298 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc89087f_fead_4af8_b13c_67af8c77e7f7.slice/crio-8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e WatchSource:0}: Error finding container 8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e: Status 404 returned error can't find the container with id 8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520105 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"} Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520338 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"} Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520348 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e"} Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.539738 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.539725508 podStartE2EDuration="2.539725508s" podCreationTimestamp="2026-01-28 11:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:06.538622613 +0000 UTC m=+1322.333502587" watchObservedRunningTime="2026-01-28 11:44:06.539725508 +0000 UTC m=+1322.334605492" Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.850333 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:44:06 crc kubenswrapper[4804]: 
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.850463 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.669320 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.669863 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics" containerID="cri-o://bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a" gracePeriod=30
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.853912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.143794 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.157034 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.185166 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") "
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.191586 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v" (OuterVolumeSpecName: "kube-api-access-r249v") pod "97a6e239-25e0-4962-8c9d-4751ca2f4b1d" (UID: "97a6e239-25e0-4962-8c9d-4751ca2f4b1d"). InnerVolumeSpecName "kube-api-access-r249v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.288523 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552828 4804 generic.go:334] "Generic (PLEG): container finished" podID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a" exitCode=2
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552917 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerDied","Data":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"}
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552968 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerDied","Data":"e28d6e15bb8b7864184a210b8a21979cfee4c6a5d5b942d21fe32b6ed7b6e02c"}
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552989 4804 scope.go:117] "RemoveContainer" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.553156 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.598611 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.600350 4804 scope.go:117] "RemoveContainer" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: E0128 11:44:09.607347 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": container with ID starting with bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a not found: ID does not exist" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.607409 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"} err="failed to get container status \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": rpc error: code = NotFound desc = could not find container \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": container with ID starting with bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a not found: ID does not exist"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.608676 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.619985 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: E0128 11:44:09.620864 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.620915 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.621172 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.621938 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.624144 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.624300 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.627440 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697121 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697315 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697355 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697522 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.799801 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800283 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800304 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.804874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.805293 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.813664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.816427 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.953842 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.367839 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368724 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" containerID="cri-o://10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" gracePeriod=30
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368799 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" containerID="cri-o://1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" gracePeriod=30
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368806 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" containerID="cri-o://2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" gracePeriod=30
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.369414 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" containerID="cri-o://1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" gracePeriod=30
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.420183 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.421711 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.564205 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" exitCode=2
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.564273 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6"}
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.565800 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerStarted","Data":"21e20525ca7a6c58cab2832c14cfe80c2d4514f39f84f4eb3108c5f05572b1bf"}
Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.928724 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" path="/var/lib/kubelet/pods/97a6e239-25e0-4962-8c9d-4751ca2f4b1d/volumes"
Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577812 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" exitCode=0
Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577843 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" exitCode=0
Jan
28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577854 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f"} Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577920 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301"} Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.851068 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.851119 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.582030 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.582413 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.591701 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" exitCode=0 Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.591758 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56"} Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.593121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerStarted","Data":"ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710"} Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.593261 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.619492 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.293222735 podStartE2EDuration="3.619474955s" podCreationTimestamp="2026-01-28 11:44:09 +0000 UTC" firstStartedPulling="2026-01-28 11:44:10.421488098 +0000 UTC m=+1326.216368082" lastFinishedPulling="2026-01-28 11:44:11.747740318 +0000 UTC m=+1327.542620302" observedRunningTime="2026-01-28 11:44:12.607631164 +0000 UTC m=+1328.402511148" watchObservedRunningTime="2026-01-28 11:44:12.619474955 +0000 UTC m=+1328.414354929" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.866212 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.866213 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.886192 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.900827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.900930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901159 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901258 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901410 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901613 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901656 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.902432 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.903456 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.907808 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts" (OuterVolumeSpecName: "scripts") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.908745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q" (OuterVolumeSpecName: "kube-api-access-rmm2q") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "kube-api-access-rmm2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.990721 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005264 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005444 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005558 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005692 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.040093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.061962 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data" (OuterVolumeSpecName: "config-data") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.107602 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.107643 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607032 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9"} Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607112 4804 scope.go:117] "RemoveContainer" containerID="2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607191 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.645152 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.645157 4804 scope.go:117] "RemoveContainer" containerID="1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.664269 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.673935 4804 scope.go:117] "RemoveContainer" containerID="1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.687786 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688228 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688240 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688256 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688262 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688275 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688282 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688304 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688310 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688489 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688506 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688516 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688525 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.690916 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.696784 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697032 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697222 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697261 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.716953 4804 scope.go:117] "RemoveContainer" containerID="10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724058 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724203 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724923 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725057 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725199 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725237 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 
11:44:13.725829 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.828992 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829059 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829157 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829177 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829281 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829322 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.830357 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: 
I0128 11:44:13.830663 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.833685 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.833806 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.835199 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.835415 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.840322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.858058 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.020970 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.143940 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.170214 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.463437 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:14 crc kubenswrapper[4804]: W0128 11:44:14.464649 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f7d3c2_ab36_467f_8ad5_0e899f804eca.slice/crio-5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78 WatchSource:0}: Error finding container 5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78: Status 404 returned error can't find the container with id 5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78 Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.619161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78"} Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.646129 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.925812 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3580297-d401-446c-818f-fbb89e50c757" path="/var/lib/kubelet/pods/f3580297-d401-446c-818f-fbb89e50c757/volumes" Jan 28 11:44:15 crc kubenswrapper[4804]: I0128 11:44:15.206593 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:15 crc kubenswrapper[4804]: I0128 11:44:15.207805 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.288094 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.288098 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.636256 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a"} Jan 28 11:44:18 crc kubenswrapper[4804]: I0128 11:44:18.673906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8"} Jan 28 11:44:19 crc kubenswrapper[4804]: I0128 11:44:19.686203 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9"} Jan 28 11:44:19 crc kubenswrapper[4804]: I0128 11:44:19.968181 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.856248 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.856919 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.860544 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.862866 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.713991 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221"} Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.716218 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.716237 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerDied","Data":"a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959"} Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.715998 4804 generic.go:334] "Generic (PLEG): container finished" podID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerID="a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" exitCode=137 Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.738121 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.509004686 podStartE2EDuration="9.738101813s" podCreationTimestamp="2026-01-28 11:44:13 +0000 UTC" firstStartedPulling="2026-01-28 11:44:14.466747589 +0000 UTC m=+1330.261627573" lastFinishedPulling="2026-01-28 11:44:21.695844716 +0000 UTC m=+1337.490724700" observedRunningTime="2026-01-28 11:44:22.73129537 +0000 UTC m=+1338.526175364" watchObservedRunningTime="2026-01-28 11:44:22.738101813 +0000 UTC m=+1338.532981787" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.848729 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.017990 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.018083 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.018250 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.022914 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn" (OuterVolumeSpecName: "kube-api-access-z2dpn") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "kube-api-access-z2dpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.044356 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.054825 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data" (OuterVolumeSpecName: "config-data") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120522 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120789 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120867 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.735475 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.735706 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerDied","Data":"30cd19d729fe0a8f365f4576d67a9396141f36b3555091744e62104b74b1d641"} Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.736498 4804 scope.go:117] "RemoveContainer" containerID="a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.788395 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.805169 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816104 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: E0128 11:44:23.816567 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816585 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816798 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.818355 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825189 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825679 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.842699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.941962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942048 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942526 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044137 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044203 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044360 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.049635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050063 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050413 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050857 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.086153 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.145450 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.584117 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.744185 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerStarted","Data":"3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841"} Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.927725 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" path="/var/lib/kubelet/pods/84b18213-5ffe-40a4-b2f7-a8bb117d9a79/volumes" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.209919 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.210511 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.211617 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.214363 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.757773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerStarted","Data":"67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a"} Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.758539 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.765621 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.779584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.779565671 podStartE2EDuration="2.779565671s" podCreationTimestamp="2026-01-28 11:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:25.776692491 +0000 UTC m=+1341.571572475" watchObservedRunningTime="2026-01-28 11:44:25.779565671 +0000 UTC m=+1341.574445655" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.008319 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.012602 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.026719 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192087 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192111 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192280 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192454 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294543 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294613 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294651 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294696 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294725 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.295436 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.295471 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296776 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.315841 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpf5\" (UniqueName: 
\"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.334326 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.826524 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: W0128 11:44:26.836112 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7cab05f_efa6_4a74_920b_96f8f30f1736.slice/crio-02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e WatchSource:0}: Error finding container 02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e: Status 404 returned error can't find the container with id 02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.778212 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerID="2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c" exitCode=0 Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.779826 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c"} Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.779859 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerStarted","Data":"02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073389 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073644 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" containerID="cri-o://6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073875 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" containerID="cri-o://71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.074109 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" containerID="cri-o://403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.074161 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" containerID="cri-o://b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" gracePeriod=30 Jan 28 11:44:28 
crc kubenswrapper[4804]: I0128 11:44:28.488093 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.796645 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerStarted","Data":"91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.797095 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800093 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800124 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" exitCode=2 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800133 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800143 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800254 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800289 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800302 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800340 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" containerID="cri-o://b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800545 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" containerID="cri-o://3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: 
I0128 11:44:28.832941 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" podStartSLOduration=3.832925562 podStartE2EDuration="3.832925562s" podCreationTimestamp="2026-01-28 11:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:28.820250286 +0000 UTC m=+1344.615130270" watchObservedRunningTime="2026-01-28 11:44:28.832925562 +0000 UTC m=+1344.627805546" Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.911728 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049327 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049493 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049630 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049670 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049700 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049715 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049739 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc 
kubenswrapper[4804]: I0128 11:44:29.049770 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049838 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.050246 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.050263 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.058035 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts" (OuterVolumeSpecName: "scripts") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.058056 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9" (OuterVolumeSpecName: "kube-api-access-chjp9") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "kube-api-access-chjp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.077529 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.121572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.136225 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.146320 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152274 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152307 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152319 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152328 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152337 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.155700 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data" (OuterVolumeSpecName: "config-data") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.253976 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.809474 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" exitCode=143 Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.809532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"} Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812100 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78"} Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812145 4804 scope.go:117] "RemoveContainer" containerID="71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812116 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.854144 4804 scope.go:117] "RemoveContainer" containerID="b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.869144 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.878999 4804 scope.go:117] "RemoveContainer" containerID="403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.880006 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.897398 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898126 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898224 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898315 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898393 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898480 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898563 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898651 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898725 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899054 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899162 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899252 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899342 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.901418 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.903728 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.904052 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.904433 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.908049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.908864 4804 scope.go:117] "RemoveContainer" containerID="6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967323 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967415 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967449 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967610 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967649 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: 
I0128 11:44:29.967734 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.997677 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.998531 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-hldj2 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="9716bec9-6c7d-49e3-8c79-ba4c723d8be9" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069432 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069487 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.070060 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.071798 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072269 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072395 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072635 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.074195 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.076651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.079185 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.079327 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.084639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.085915 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.088460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.821831 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.834540 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.885773 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886032 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886219 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886252 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886297 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886320 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886941 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts" (OuterVolumeSpecName: "scripts") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891714 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2" (OuterVolumeSpecName: "kube-api-access-hldj2") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "kube-api-access-hldj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891731 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data" (OuterVolumeSpecName: "config-data") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.892120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.892536 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.893066 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.927864 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" path="/var/lib/kubelet/pods/88f7d3c2-ab36-467f-8ad5-0e899f804eca/volumes" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990002 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990038 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990051 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990062 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990072 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990113 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990124 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990134 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.833958 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.892866 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.905083 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.919168 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.921996 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.924138 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.924335 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.925084 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.929542 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010394 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010491 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010601 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010636 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011121 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011173 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011295 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldd52\" (UniqueName: 
\"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117851 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117954 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117996 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118058 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118096 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118129 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118152 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118319 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.123286 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.123374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.134348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.134545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.135608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.137489 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.241674 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.384835 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.524969 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525125 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525156 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525187 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.528612 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs" (OuterVolumeSpecName: "logs") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.533075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4" (OuterVolumeSpecName: "kube-api-access-jxws4") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "kube-api-access-jxws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.557865 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.561224 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data" (OuterVolumeSpecName: "config-data") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628073 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628408 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628420 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628428 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.720012 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: W0128 11:44:32.735896 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f5a2ef_6224_4af8_8bba_32c689a960f1.slice/crio-84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a WatchSource:0}: Error finding container 84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a: Status 404 returned error can't find the container with id 84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844819 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" exitCode=0 Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844910 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844945 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844967 4804 scope.go:117] "RemoveContainer" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.845091 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.849455 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.876292 4804 scope.go:117] "RemoveContainer" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.879307 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.897975 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.900576 4804 scope.go:117] "RemoveContainer" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.902114 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": container with ID starting with 3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b not found: ID does not exist" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902141 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"} err="failed to get container status \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": rpc error: code = NotFound desc = could not find container \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": container with ID starting with 3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b not found: ID does not exist" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902160 4804 scope.go:117] "RemoveContainer" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.902384 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": container with ID starting with b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807 not found: ID does not exist" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902400 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"} err="failed to get container status \"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": rpc error: code = NotFound desc = could not find container \"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": container with ID starting with b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807 not found: ID does not exist" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909221 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.909733 4804 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909756 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.909850 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909857 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.910067 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.910116 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.911347 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.913457 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.916838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.919856 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.936082 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9716bec9-6c7d-49e3-8c79-ba4c723d8be9" path="/var/lib/kubelet/pods/9716bec9-6c7d-49e3-8c79-ba4c723d8be9/volumes" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.936768 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" path="/var/lib/kubelet/pods/fc89087f-fead-4af8-b13c-67af8c77e7f7/volumes" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.939084 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.045933 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.045988 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046207 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046429 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046531 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.148986 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149050 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149069 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149138 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 
11:44:33.155751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.156453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.160538 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.164626 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.165501 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.233630 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.824462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.859677 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"b308f4806837516327e93a19a8f6375deeb1fd9edc0b6c41208476dcc8be7a1b"} Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.861716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"} Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.147152 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.169359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.872034 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"} Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.872379 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"} Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.876671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"} Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.898855 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.898834278 podStartE2EDuration="2.898834278s" podCreationTimestamp="2026-01-28 11:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:34.889088172 +0000 UTC m=+1350.683968176" watchObservedRunningTime="2026-01-28 11:44:34.898834278 +0000 UTC m=+1350.693714272" Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.904435 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.067806 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"] Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.069017 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.071670 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.071984 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.078328 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"] Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190228 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190500 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190686 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292163 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292218 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292303 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.297785 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.298247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.298469 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.309017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.411379 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.872023 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"] Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.336086 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.402775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.403055 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-9f892" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" containerID="cri-o://a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f" gracePeriod=10 Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.896434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"} Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.901294 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerStarted","Data":"2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7"} Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.901336 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerStarted","Data":"cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4"} Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.905006 4804 generic.go:334] "Generic (PLEG): container finished" podID="913fe193-1d5f-4561-9618-fde749a25a1d" containerID="a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f" exitCode=0 Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.905046 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f"} Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.931376 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-29mtd" podStartSLOduration=1.931360303 podStartE2EDuration="1.931360303s" podCreationTimestamp="2026-01-28 11:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:36.926306715 +0000 UTC m=+1352.721186699" watchObservedRunningTime="2026-01-28 11:44:36.931360303 +0000 UTC m=+1352.726240287" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.045803 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143723 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143783 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143867 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143975 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.144015 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.163265 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v" (OuterVolumeSpecName: "kube-api-access-l7t2v") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "kube-api-access-l7t2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.207336 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.208581 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config" (OuterVolumeSpecName: "config") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.229985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.235525 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247463 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247508 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247525 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247537 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247548 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.251144 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.349871 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917335 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4"} Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917421 4804 scope.go:117] "RemoveContainer" containerID="a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917368 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.950700 4804 scope.go:117] "RemoveContainer" containerID="5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc" Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.971502 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.982425 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:44:38 crc kubenswrapper[4804]: I0128 11:44:38.930748 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" path="/var/lib/kubelet/pods/913fe193-1d5f-4561-9618-fde749a25a1d/volumes" Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.937725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"} Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.938020 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.979581 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.448041488 podStartE2EDuration="8.979552632s" podCreationTimestamp="2026-01-28 11:44:31 +0000 UTC" firstStartedPulling="2026-01-28 11:44:32.738217431 +0000 UTC m=+1348.533097415" lastFinishedPulling="2026-01-28 11:44:39.269728575 +0000 UTC m=+1355.064608559" observedRunningTime="2026-01-28 11:44:39.965559854 +0000 UTC m=+1355.760439838" watchObservedRunningTime="2026-01-28 11:44:39.979552632 +0000 UTC m=+1355.774432616" Jan 28 11:44:41 crc kubenswrapper[4804]: I0128 11:44:41.958068 4804 generic.go:334] "Generic (PLEG): container finished" podID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerID="2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7" exitCode=0 Jan 28 11:44:41 crc kubenswrapper[4804]: I0128 11:44:41.958309 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerDied","Data":"2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7"} Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582374 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582427 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582469 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.583226 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.583293 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95" gracePeriod=600 Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972127 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95" exitCode=0 Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"} Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972607 4804 scope.go:117] "RemoveContainer" containerID="ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.235562 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.235618 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.340349 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369377 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369541 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.376964 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts" (OuterVolumeSpecName: "scripts") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.381960 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr" (OuterVolumeSpecName: "kube-api-access-4hljr") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "kube-api-access-4hljr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.402415 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.405555 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data" (OuterVolumeSpecName: "config-data") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.471729 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472034 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472102 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472163 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986539 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerDied","Data":"cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4"} Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986586 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986647 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd" Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.995429 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"} Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218059 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218623 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" containerID="cri-o://19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218755 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" containerID="cri-o://88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.230118 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF" Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.231302 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.231508 4804 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" containerID="cri-o://79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.237647 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.304049 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.311251 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" containerID="cri-o://ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.311958 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" containerID="cri-o://2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" gracePeriod=30 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.017800 4804 generic.go:334] "Generic (PLEG): container finished" podID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" exitCode=143 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.017916 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"} Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.021366 4804 generic.go:334] "Generic (PLEG): container finished" podID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" exitCode=143 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.021465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"} Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.473434 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:51214->10.217.0.196:8775: read: connection reset by peer" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.473541 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:51198->10.217.0.196:8775: read: connection reset by peer" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.927569 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957248 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957810 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.958331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs" (OuterVolumeSpecName: "logs") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.958732 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.968462 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72" (OuterVolumeSpecName: "kube-api-access-zcd72") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "kube-api-access-zcd72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.042136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data" (OuterVolumeSpecName: "config-data") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.039554 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082301 4804 generic.go:334] "Generic (PLEG): container finished" podID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" exitCode=0 Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082390 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082528 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084260 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"} Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312"} Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084347 4804 scope.go:117] "RemoveContainer" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093175 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093241 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093258 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093281 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.127908 4804 scope.go:117] "RemoveContainer" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.156518 4804 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.176408 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.178065 4804 scope.go:117] "RemoveContainer" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.180371 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": container with ID starting with 2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a not found: ID does not exist" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.180422 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"} err="failed to get container status \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": rpc error: code = NotFound desc = could not find container \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": container with ID starting with 2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a not found: ID does not exist" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.180450 4804 scope.go:117] "RemoveContainer" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.186039 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": container with ID starting with ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f not found: ID does not exist" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.186107 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"} err="failed to get container status \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": rpc error: code = NotFound desc = could not find container \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": container with ID starting with ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f not found: ID does not exist" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.194535 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195280 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195311 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195339 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195349 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195369 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195378 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195422 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="init" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195432 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="init" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195473 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195735 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195752 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195769 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195781 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.197765 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.201813 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.202222 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.217712 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298128 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298223 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298326 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298611 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298850 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.401705 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403221 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403433 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 
11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403633 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.405067 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.407648 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.409404 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.419973 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.426812 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.509186 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.528026 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608716 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.613953 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz" (OuterVolumeSpecName: "kube-api-access-rphbz") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "kube-api-access-rphbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.634413 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.645337 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data" (OuterVolumeSpecName: "config-data") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715736 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715789 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715806 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.938241 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" path="/var/lib/kubelet/pods/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3/volumes" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.024571 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: W0128 11:44:49.031944 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0bfaf6b_2c74_4812_965a_4db80f0c4527.slice/crio-dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e WatchSource:0}: Error finding container dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e: Status 404 returned error can't find the container with id dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091780 4804 generic.go:334] "Generic (PLEG): container finished" podID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" exitCode=0 Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091843 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091972 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerDied","Data":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.092018 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerDied","Data":"f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.092042 4804 scope.go:117] "RemoveContainer" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.095746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.126875 4804 scope.go:117] "RemoveContainer" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: E0128 11:44:49.127470 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": container with ID starting with 79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1 not found: ID does not exist" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.127707 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"} err="failed to get container status \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": rpc error: code = NotFound desc = could not find container \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": container with ID starting with 79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1 not found: ID does not exist" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.159567 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.175580 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.184831 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: E0128 11:44:49.185218 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.185235 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.185418 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.186036 4804 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.188868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.199187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224351 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224395 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327625 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327688 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327745 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.333931 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.333984 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.350539 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdj4\" (UniqueName: 
\"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.509553 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.977514 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.073452 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113211 4804 generic.go:334] "Generic (PLEG): container finished" podID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" exitCode=0 Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113312 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113596 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"b308f4806837516327e93a19a8f6375deeb1fd9edc0b6c41208476dcc8be7a1b"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113615 4804 scope.go:117] "RemoveContainer" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113322 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.116519 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerStarted","Data":"0f20d09f4e22850dccdafc066e7822cd90278816628e2fe4c307f19e6234a0ef"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.126502 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.126552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.142236 4804 scope.go:117] "RemoveContainer" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143204 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143266 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143763 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143800 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143928 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.144331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs" (OuterVolumeSpecName: "logs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: 
"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.146695 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.158809 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.158785826 podStartE2EDuration="2.158785826s" podCreationTimestamp="2026-01-28 11:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:50.156401972 +0000 UTC m=+1365.951281956" watchObservedRunningTime="2026-01-28 11:44:50.158785826 +0000 UTC m=+1365.953665810" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164126 4804 scope.go:117] "RemoveContainer" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.164468 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": container with ID starting with 88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073 not found: ID does not exist" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164568 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"} err="failed to get container status \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": rpc error: code = NotFound desc = could not find container \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": container with ID starting with 88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073 not found: ID does not exist" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164644 4804 scope.go:117] "RemoveContainer" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.165101 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": container with ID starting with 19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31 not found: ID does not exist" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.165124 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"} err="failed to get container status \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": rpc error: code = NotFound desc = could not find container \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": container with ID starting with 19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31 not found: ID does not exist" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.170346 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8" (OuterVolumeSpecName: "kube-api-access-d6jb8") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "kube-api-access-d6jb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.199949 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.202576 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data" (OuterVolumeSpecName: "config-data") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.222515 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.224438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249405 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249483 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249514 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249644 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249657 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.471747 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.495362 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.514839 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.515509 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515533 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.515556 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515782 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515811 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.517232 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.521578 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.521614 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.522443 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.526340 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.556695 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.556976 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557063 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557202 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658638 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: 
\"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658821 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658911 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.659739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.662932 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.663004 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.664179 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.666177 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.684652 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " 
pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.901631 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.943495 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" path="/var/lib/kubelet/pods/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711/volumes" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.944480 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" path="/var/lib/kubelet/pods/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650/volumes" Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.145689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerStarted","Data":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"} Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.173019 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.172998193 podStartE2EDuration="2.172998193s" podCreationTimestamp="2026-01-28 11:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:51.162192836 +0000 UTC m=+1366.957072840" watchObservedRunningTime="2026-01-28 11:44:51.172998193 +0000 UTC m=+1366.967878187" Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.406019 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:51 crc kubenswrapper[4804]: W0128 11:44:51.415379 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0fb199_797a_40c6_8c71_3b5a976b6c61.slice/crio-b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf WatchSource:0}: Error finding container b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf: Status 404 returned error can't find the container with id b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.160819 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.161409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.161433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.202434 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.202403803 podStartE2EDuration="2.202403803s" podCreationTimestamp="2026-01-28 11:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
11:44:52.185154675 +0000 UTC m=+1367.980034659" watchObservedRunningTime="2026-01-28 11:44:52.202403803 +0000 UTC m=+1367.997283787" Jan 28 11:44:53 crc kubenswrapper[4804]: I0128 11:44:53.529313 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:44:53 crc kubenswrapper[4804]: I0128 11:44:53.530657 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:44:54 crc kubenswrapper[4804]: I0128 11:44:54.510864 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 11:44:58 crc kubenswrapper[4804]: I0128 11:44:58.528483 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:58 crc kubenswrapper[4804]: I0128 11:44:58.529056 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.510836 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.540118 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.543206 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.543241 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.163327 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"] Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.165303 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.167514 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.167666 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.173531 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"] Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.183991 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.184082 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.184385 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.257531 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.286727 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.287144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.287274 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.288153 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.295017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.310137 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.488941 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:00.902473 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:00.902796 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.767650 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"] Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.929063 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.929076 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.255661 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerStarted","Data":"ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625"} Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.256046 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerStarted","Data":"85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112"} Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.258665 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 
11:45:02.283754 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" podStartSLOduration=2.283732974 podStartE2EDuration="2.283732974s" podCreationTimestamp="2026-01-28 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:45:02.277141348 +0000 UTC m=+1378.072021332" watchObservedRunningTime="2026-01-28 11:45:02.283732974 +0000 UTC m=+1378.078612958" Jan 28 11:45:03 crc kubenswrapper[4804]: I0128 11:45:03.265588 4804 generic.go:334] "Generic (PLEG): container finished" podID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerID="ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625" exitCode=0 Jan 28 11:45:03 crc kubenswrapper[4804]: I0128 11:45:03.265639 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerDied","Data":"ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625"} Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.642082 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774527 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774764 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.779922 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.781996 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn" (OuterVolumeSpecName: "kube-api-access-zstbn") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "kube-api-access-zstbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.782906 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876522 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876556 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876566 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerDied","Data":"85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112"} Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285465 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112" Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285533 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.549198 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.550829 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.560258 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:45:09 crc kubenswrapper[4804]: I0128 11:45:09.345151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.910620 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.910726 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.911129 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.911176 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.926214 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.926277 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.393600 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.394449 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient" containerID="cri-o://0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" gracePeriod=2 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.406310 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.580692 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.580937 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler" containerID="cri-o://005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" gracePeriod=30 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.581475 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe" containerID="cri-o://c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" gracePeriod=30 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.627531 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:30 crc kubenswrapper[4804]: E0128 11:45:30.628026 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628039 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles" Jan 28 11:45:30 crc kubenswrapper[4804]: E0128 11:45:30.628064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628070 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628247 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628274 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.630018 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.637959 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.650054 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.694870 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.696251 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.705032 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.740354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.741075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.741217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.824540 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842763 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842925 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842957 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.843730 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " 
pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.847252 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.871477 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.871781 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" containerID="cri-o://b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a" gracePeriod=30 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.872249 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api" containerID="cri-o://7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774" gracePeriod=30 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.895975 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946520 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.947348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.956590 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.956850 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.962830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.975865 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:45:30 crc kubenswrapper[4804]: 
I0128 11:45:30.976155 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-gtg97" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter" containerID="cri-o://565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" gracePeriod=30 Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.995197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.014007 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.036814 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.036897 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.039343 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.052426 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.052479 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:31.552464585 +0000 UTC m=+1407.347344569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.093816 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.129773 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.149830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.250757 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.337161 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.390394 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.422028 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.422900 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" containerID="cri-o://1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" gracePeriod=30 Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.423225 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" containerID="cri-o://17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" gracePeriod=30 Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.471379 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.471448 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:31.971428865 +0000 UTC m=+1407.766308949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.495919 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.525947 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.561998 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.563673 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.570467 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.572541 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.572627 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:32.572607889 +0000 UTC m=+1408.367487873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.587827 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.635431 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.676391 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.676476 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.734082 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.796330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: 
\"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.796671 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.801191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.897210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.927145 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.927982 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter" containerID="cri-o://083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" gracePeriod=300 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.022469 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.030267 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.035251 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.035209871 +0000 UTC m=+1408.830089855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.049265 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.124365 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.155770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.219971 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.265818 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bnpvd"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.296162 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bnpvd"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.339940 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340425 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" containerID="cri-o://c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340829 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" containerID="cri-o://3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340875 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" containerID="cri-o://a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340997 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" containerID="cri-o://43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341032 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" containerID="cri-o://02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341064 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" containerID="cri-o://88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 
11:45:32.341092 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" containerID="cri-o://f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341120 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" containerID="cri-o://5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341155 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" containerID="cri-o://e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341184 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" containerID="cri-o://ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341212 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" containerID="cri-o://fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341240 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" containerID="cri-o://a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341280 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" containerID="cri-o://a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341309 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" containerID="cri-o://2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341340 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" containerID="cri-o://1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.432040 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.432668 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter" 
containerID="cri-o://7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" gracePeriod=300 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.477290 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb" containerID="cri-o://445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" gracePeriod=300 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.500775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.557138 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.626206 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"] Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.666198 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.666259 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:34.666245442 +0000 UTC m=+1410.461125416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.683698 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.683946 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" containerID="cri-o://54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.684288 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" containerID="cri-o://2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.714940 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.715478 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns" containerID="cri-o://91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" gracePeriod=10 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.735939 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" containerID="cri-o://1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" gracePeriod=300 Jan 28 11:45:32 crc 
kubenswrapper[4804]: I0128 11:45:32.760671 4804 generic.go:334] "Generic (PLEG): container finished" podID="edcdd787-6628-49ee-abcf-0146c096f547" containerID="17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" exitCode=2 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.760761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1"} Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.763003 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.776032 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.790480 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.811970 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.818033 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821435 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821479 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7359aec-58b3-4254-8765-cdc131e5f912" containerID="565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" exitCode=2 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerDied","Data":"565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b"} Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.834001 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerStarted","Data":"59c191ec61924ab2b5f8fefd52ae2f9680b75391edc58ed63a1c1c209e71f63c"} Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.836876 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.843123 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.850917 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.871114 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.874105 4804 generic.go:334] "Generic (PLEG): container finished" podID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerID="b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a" exitCode=143 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.874160 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a"} Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.899971 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.929416 4804 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance-default-internal-api-0" secret="" err="secret \"glance-glance-dockercfg-dv6zq\" not found" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.938311 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" path="/var/lib/kubelet/pods/12849043-1f8e-4d1f-aae3-9cbc35ea4361/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.938928 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" path="/var/lib/kubelet/pods/2baa2aa0-600d-4728-bb8c-7fee05022658/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.939523 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" path="/var/lib/kubelet/pods/38148c07-9662-4f0b-8285-a02633a7cd37/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.940204 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" path="/var/lib/kubelet/pods/3bd4fedc-8940-48ad-b718-4fbb98e48bf0/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.943959 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57723f90-020a-42b7-ad6c-49e998417f27" path="/var/lib/kubelet/pods/57723f90-020a-42b7-ad6c-49e998417f27/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.944673 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b292a47-f331-472d-941e-193e41fee49f" path="/var/lib/kubelet/pods/6b292a47-f331-472d-941e-193e41fee49f/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.947254 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" path="/var/lib/kubelet/pods/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.947930 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" path="/var/lib/kubelet/pods/99ffbce9-a3f3-4012-861a-fae498510fde/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.953354 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" path="/var/lib/kubelet/pods/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.954076 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" path="/var/lib/kubelet/pods/bf79509c-10e0-4ebc-a55d-e46f5497e2fd/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.954689 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" path="/var/lib/kubelet/pods/cb46a04b-0e73-46fb-bcdf-a670c30d5531/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.958152 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" 
path="/var/lib/kubelet/pods/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.958794 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" path="/var/lib/kubelet/pods/d5916f11-436f-46f9-b76e-304aa86f91a1/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.959688 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" path="/var/lib/kubelet/pods/da587a6a-8109-4c08-8395-f4cd6b078dc7/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.960249 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" path="/var/lib/kubelet/pods/e541b2a6-870f-4829-bdfc-ad3e4368ec0b/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.965712 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" path="/var/lib/kubelet/pods/f76909b5-2ed7-476f-8f90-d8c9d168af6d/volumes" Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966325 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966355 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966370 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966578 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" containerID="cri-o://49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.967024 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" containerID="cri-o://ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: W0128 11:45:32.985583 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12271c96_a234_46d8_bc32_80db78339116.slice/crio-892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e WatchSource:0}: Error finding container 892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e: Status 404 returned error can't find the container with id 892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.986576 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" exitCode=0 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.986705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"} Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.996706 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.997774 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.998302 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.998334 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998573 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998603 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6c76352-2487-4098-bbee-579834052292" containerID="083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" exitCode=2 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998628 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885"} Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.022269 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.032470 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:33 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: if [ -n "barbican" ]; then Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="barbican" Jan 28 11:45:33 crc kubenswrapper[4804]: else Jan 28 11:45:33 crc 
kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:33 crc kubenswrapper[4804]: fi Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:33 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:33 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:33 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:33 crc kubenswrapper[4804]: # support updates Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.034991 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8522-account-create-update-8fq2p" podUID="12271c96-a234-46d8-bc32-80db78339116" Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.082876 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.082959 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.582945262 +0000 UTC m=+1409.377825246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084005 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084081 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:35.084063447 +0000 UTC m=+1410.878943431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084148 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084173 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.58416657 +0000 UTC m=+1409.379046554 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.116021 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.129931 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.154936 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.164590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.195212 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.201054 4804 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 28 11:45:33 crc kubenswrapper[4804]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642 Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 11:45:33 crc kubenswrapper[4804]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 11:45:33 crc kubenswrapper[4804]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-pfzkj" message=< Jan 28 11:45:33 crc kubenswrapper[4804]: Exiting ovsdb-server (5) [ OK ] Jan 28 11:45:33 crc kubenswrapper[4804]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642 Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 11:45:33 crc kubenswrapper[4804]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 11:45:33 crc kubenswrapper[4804]: > Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.201091 4804 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 28 11:45:33 crc kubenswrapper[4804]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642 Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName= Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 11:45:33 crc kubenswrapper[4804]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 11:45:33 crc kubenswrapper[4804]: > pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" containerID="cri-o://b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.201123 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" containerID="cri-o://b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" gracePeriod=28 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.234210 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.234825 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" containerID="cri-o://5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.235498 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" containerID="cri-o://789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.271960 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" containerID="cri-o://a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" gracePeriod=604800 Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.285079 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:33 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: if [ -n "nova_api" ]; then Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="nova_api" Jan 28 11:45:33 crc kubenswrapper[4804]: else Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:33 crc kubenswrapper[4804]: fi Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:33 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:33 crc kubenswrapper[4804]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:33 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:33 crc kubenswrapper[4804]: # support updates Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.286238 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.369747 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.389429 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.401035 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414101 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414355 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" containerID="cri-o://5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414805 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" containerID="cri-o://f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.434086 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.434395 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8f675b957-rm9qp" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" containerID="cri-o://f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.435168 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8f675b957-rm9qp" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" containerID="cri-o://1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.460690 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" containerID="cri-o://27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" gracePeriod=28 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.460839 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.461087 4804 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" containerID="cri-o://55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.461704 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api" containerID="cri-o://8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.476112 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.487083 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.494908 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.501850 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.509804 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.510260 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" containerID="cri-o://1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.510918 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" containerID="cri-o://bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.545258 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.548108 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.578199 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.588417 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.588764 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" containerID="cri-o://61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.589025 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" 
containerID="cri-o://5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593463 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593514 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:34.593500658 +0000 UTC m=+1410.388380642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593660 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593723 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:34.593703914 +0000 UTC m=+1410.388583898 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.595793 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.608640 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.608950 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.616724 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.623344 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.623415 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.625153 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.635495 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.636152 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" containerID="cri-o://351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.643621 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.653771 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.672441 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.680980 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" containerID="cri-o://95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" gracePeriod=604800 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.694126 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.702992 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.703228 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" containerID="cri-o://df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.719319 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: 
\"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796928 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796995 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797019 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797058 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797074 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797128 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797765 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797814 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.798706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config" (OuterVolumeSpecName: "config") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.815295 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c" (OuterVolumeSpecName: "kube-api-access-5fk9c") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "kube-api-access-5fk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.820023 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd" (OuterVolumeSpecName: "kube-api-access-d6kjd") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "kube-api-access-d6kjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.863566 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.872241 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900152 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900183 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900192 4804 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900201 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900209 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900219 4804 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900226 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.938342 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.938843 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" containerID="cri-o://87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.948775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.955369 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.966962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.968036 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.968290 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" containerID="cri-o://fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 
11:45:33.975002 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.016043 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.016513 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018184 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerDied","Data":"79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018338 4804 scope.go:117] "RemoveContainer" containerID="565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.019414 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042803 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerID="c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042843 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerID="005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042951 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.047153 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerID="61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.047198 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.054679 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.054743 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063688 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063733 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6c76352-2487-4098-bbee-579834052292" containerID="445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063926 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063955 4804 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.066475 4804 generic.go:334] "Generic (PLEG): container finished" podID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" exitCode=1 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.066527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.067545 4804 scope.go:117] "RemoveContainer" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.076434 4804 generic.go:334] "Generic (PLEG): container finished" podID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" exitCode=137 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.076656 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.086334 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerID="91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087357 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087407 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087422 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.091573 4804 generic.go:334] "Generic (PLEG): container finished" podID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerID="54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.091630 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094761 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_50c4ac86-3241-4cd1-aa15-9a36b6be1e03/ovsdbserver-sb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094797 4804 generic.go:334] "Generic (PLEG): container finished" podID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerID="7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" exitCode=2 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094810 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094866 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094875 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094943 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.097958 4804 generic.go:334] "Generic (PLEG): container finished" podID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.098010 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.103813 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.103838 4804 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.104662 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.104735 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.116085 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" event={"ID":"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d","Type":"ContainerStarted","Data":"314f7cccb05be770227b06402be77c91999b3a0c06e5100b791025a241c569ff"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.140281 4804 generic.go:334] "Generic (PLEG): container finished" podID="878daeff-34bf-4dab-8118-e42c318849bb" containerID="f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" exitCode=143 Jan 28 11:45:34 crc 
kubenswrapper[4804]: I0128 11:45:34.140388 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.149080 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.201:6080/vnc_lite.html\": dial tcp 10.217.0.201:6080: connect: connection refused" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.158771 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:34 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: if [ -n "nova_api" ]; then Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="nova_api" Jan 28 11:45:34 crc kubenswrapper[4804]: else Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:34 crc kubenswrapper[4804]: fi Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:34 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:34 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:34 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:34 crc kubenswrapper[4804]: # support updates Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.159839 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.208156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209614 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209634 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209641 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209647 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209654 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209659 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209665 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209671 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209677 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209683 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209688 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209694 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209700 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209736 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209760 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209778 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209786 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209810 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209818 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209828 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209836 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209854 4804 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.211167 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-8fq2p" event={"ID":"12271c96-a234-46d8-bc32-80db78339116","Type":"ContainerStarted","Data":"892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e"} Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.216934 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:34 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: if [ -n "barbican" ]; then Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="barbican" Jan 28 11:45:34 crc kubenswrapper[4804]: else Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:34 crc kubenswrapper[4804]: fi Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:34 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:34 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:34 crc kubenswrapper[4804]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:34 crc kubenswrapper[4804]: # support updates Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.218943 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8522-account-create-update-8fq2p" podUID="12271c96-a234-46d8-bc32-80db78339116" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.231720 4804 generic.go:334] "Generic (PLEG): container finished" podID="5198da96-d6b6-4b80-bb93-838dff10730e" containerID="49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.231868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.236985 4804 generic.go:334] "Generic (PLEG): container finished" podID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerID="1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.237077 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238441 4804 generic.go:334] "Generic (PLEG): container finished" podID="095bc753-88c4-456c-a3ae-aa0040a76338" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238609 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" containerID="cri-o://53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" gracePeriod=30 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.239195 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" containerID="cri-o://9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" gracePeriod=30 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.307517 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.384858 4804 scope.go:117] "RemoveContainer" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc 
kubenswrapper[4804]: I0128 11:45:34.418405 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.418475 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.461142 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_50c4ac86-3241-4cd1-aa15-9a36b6be1e03/ovsdbserver-sb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.490151 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.507572 4804 scope.go:117] "RemoveContainer" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.508753 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": container with ID starting with 0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67 not found: ID does not exist" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.508806 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67"} err="failed to get container status \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": rpc error: code = NotFound desc = could not find container \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": container with ID starting with 0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67 not found: ID does not exist" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.512991 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.513266 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.516733 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517298 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517355 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517516 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517927 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518581 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts" (OuterVolumeSpecName: "scripts") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518582 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.520215 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.520936 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config" (OuterVolumeSpecName: "config") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.558165 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.558622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.564758 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.564813 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.572868 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd" (OuterVolumeSpecName: "kube-api-access-x9rrd") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "kube-api-access-x9rrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.585127 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.585219 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.614895 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628318 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628378 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628463 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628518 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628662 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628686 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628861 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628900 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628952 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629400 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629432 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629444 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629458 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629469 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.630920 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config" (OuterVolumeSpecName: "config") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.637413 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts" (OuterVolumeSpecName: "scripts") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.637482 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.643846 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.644300 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.644357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:36.644339636 +0000 UTC m=+1412.439219620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.645018 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.645180 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:36.645161581 +0000 UTC m=+1412.440041565 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.674058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq" (OuterVolumeSpecName: "kube-api-access-lp6hq") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "kube-api-access-lp6hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.704580 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5" (OuterVolumeSpecName: "kube-api-access-4mpf5") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "kube-api-access-4mpf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.704643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730668 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730774 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730864 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731439 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731468 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731479 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731500 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731509 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731517 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.734917 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:50254->10.217.0.168:8776: read: connection reset by peer" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.735551 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.735631 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.735604931 +0000 UTC m=+1414.530484915 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.736761 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.769607 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.820147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835113 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835137 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835146 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.840192 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts" (OuterVolumeSpecName: "scripts") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.840207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk" (OuterVolumeSpecName: "kube-api-access-j4htk") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "kube-api-access-j4htk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.913125 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937276 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937306 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937322 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.948415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" path="/var/lib/kubelet/pods/04ea6e04-5420-4f5b-911f-cdaede8220ab/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.949517 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08795da4-549f-437a-9113-51d1003b5668" path="/var/lib/kubelet/pods/08795da4-549f-437a-9113-51d1003b5668/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.950480 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" path="/var/lib/kubelet/pods/18b33b00-9642-45dc-8256-5db39ca166f1/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.955576 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" path="/var/lib/kubelet/pods/359ecb47-f044-4273-8589-c0ceedb367b5/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.956173 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a68429-2ef0-45da-8a73-62231d018738" path="/var/lib/kubelet/pods/47a68429-2ef0-45da-8a73-62231d018738/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.956713 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" path="/var/lib/kubelet/pods/518f34a2-84c4-4115-a28d-0251d0fa8064/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.957318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" path="/var/lib/kubelet/pods/5e2ade0c-9218-4f08-b78f-b6b6ede461f7/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.958577 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" path="/var/lib/kubelet/pods/8b1029fc-e131-4d00-b538-6f0a17674c75/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.959194 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" path="/var/lib/kubelet/pods/903b6b99-b94d-428a-9c9c-7465ef27ad40/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.959780 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" path="/var/lib/kubelet/pods/dc6a2a42-6519-46c6-bb24-074e5096001f/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.966068 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" 
path="/var/lib/kubelet/pods/eaba1c3c-49d4-498e-94b8-9c8cbe8660da/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.966817 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" path="/var/lib/kubelet/pods/f35650b1-56b4-49fb-9ecc-9aa90a1386db/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.967406 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" path="/var/lib/kubelet/pods/f7359aec-58b3-4254-8765-cdc131e5f912/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.970661 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.047340 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.050671 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.053016 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.074341 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config" (OuterVolumeSpecName: "config") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.092661 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.116867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.121024 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141137 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141169 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141178 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141188 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141197 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141206 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.141267 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141308 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.141362 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.141343889 +0000 UTC m=+1414.936223873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141708 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.157375 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.176753 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.194530 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243110 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243141 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243153 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243163 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.250810 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data" (OuterVolumeSpecName: "config-data") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.252717 4804 generic.go:334] "Generic (PLEG): container finished" podID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" exitCode=143 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.255429 4804 generic.go:334] "Generic (PLEG): container finished" podID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" exitCode=1 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.256283 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.256819 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jqk9s_openstack(be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8)\"" pod="openstack/root-account-create-update-jqk9s" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.258173 4804 generic.go:334] "Generic (PLEG): container finished" podID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerID="351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.261056 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.264700 4804 generic.go:334] "Generic (PLEG): container finished" podID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerID="7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.266020 4804 generic.go:334] "Generic (PLEG): container finished" podID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerID="67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.269689 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.269917 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.277250 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336690 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336723 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336757 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"4cc14b4a4b262ffd7dca6ce3a4c78be1958d2621d179512804ce0187bc8fd56e"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336788 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerDied","Data":"67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerDied","Data":"3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336814 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336834 4804 scope.go:117] "RemoveContainer" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.343409 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" containerID="cri-o://6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" gracePeriod=30 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.343853 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" containerID="cri-o://b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa" gracePeriod=30 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.350159 4804 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.464929 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.512020 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.516714 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.552121 4804 scope.go:117] "RemoveContainer" containerID="c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.555561 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556211 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556312 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556387 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556424 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") 
" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556474 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556538 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556581 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556601 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556651 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556701 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556731 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc 
kubenswrapper[4804]: I0128 11:45:35.556766 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556847 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556875 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556913 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.560003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.561299 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.562672 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.572090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7" (OuterVolumeSpecName: "kube-api-access-r8gt7") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "kube-api-access-r8gt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.572689 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.573446 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs" (OuterVolumeSpecName: "logs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.574065 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv" (OuterVolumeSpecName: "kube-api-access-ps6sv") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "kube-api-access-ps6sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.576170 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.586558 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.606123 4804 scope.go:117] "RemoveContainer" containerID="005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.611724 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj" (OuterVolumeSpecName: "kube-api-access-k8jjj") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "kube-api-access-k8jjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.611988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts" (OuterVolumeSpecName: "scripts") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.623012 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.649493 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.652220 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660196 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660226 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660235 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660244 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660253 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660284 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660293 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660302 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660310 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660317 4804 
reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660326 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660335 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660345 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.751338 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.751692 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.754096 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.763083 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.763113 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.785850 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.802514 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.810695 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.819029 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.819071 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.846520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.854232 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865110 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865452 4804 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865476 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865492 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.868551 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data" (OuterVolumeSpecName: "config-data") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.899040 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data" (OuterVolumeSpecName: "config-data") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.921462 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.922688 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.934060 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.157:8778/\": read tcp 10.217.0.2:45092->10.217.0.157:8778: read: connection reset by peer" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.934451 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.157:8778/\": read tcp 10.217.0.2:45094->10.217.0.157:8778: read: connection reset by peer" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.935254 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.955535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.960099 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966035 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966667 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966685 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966695 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966704 4804 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" 
Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.967976 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" (UID: "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.971686 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.973713 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz" (OuterVolumeSpecName: "kube-api-access-2kqkz") pod "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" (UID: "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d"). InnerVolumeSpecName "kube-api-access-2kqkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.067688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"12271c96-a234-46d8-bc32-80db78339116\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") "
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.067748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"12271c96-a234-46d8-bc32-80db78339116\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") "
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068212 4804 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068235 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068247 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.069075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12271c96-a234-46d8-bc32-80db78339116" (UID: "12271c96-a234-46d8-bc32-80db78339116"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.076150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc" (OuterVolumeSpecName: "kube-api-access-69bqc") pod "12271c96-a234-46d8-bc32-80db78339116" (UID: "12271c96-a234-46d8-bc32-80db78339116"). InnerVolumeSpecName "kube-api-access-69bqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.171314 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.171359 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"6b06f838e59a73b485a69b93f766b0fb460afb06549c4aa004f7bac68fc724cc"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329662 4804 scope.go:117] "RemoveContainer" containerID="7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329833 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.351401 4804 generic.go:334] "Generic (PLEG): container finished" podID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" exitCode=0
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.351508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerDied","Data":"fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.377179 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-8fq2p" event={"ID":"12271c96-a234-46d8-bc32-80db78339116","Type":"ContainerDied","Data":"892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.377567 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.414138 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" event={"ID":"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d","Type":"ContainerDied","Data":"314f7cccb05be770227b06402be77c91999b3a0c06e5100b791025a241c569ff"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.414232 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.445292 4804 generic.go:334] "Generic (PLEG): container finished" podID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerID="2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" exitCode=0
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.445621 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.460595 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.471115 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888"
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.471785 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jqk9s_openstack(be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8)\"" pod="openstack/root-account-create-update-jqk9s" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.473147 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.523701 4804 scope.go:117] "RemoveContainer" containerID="b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a"
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.527968 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.531406 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.535604 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.535656 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.544361 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.559153 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.559264 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.580826 4804 generic.go:334] "Generic (PLEG): container finished" podID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerID="b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa" exitCode=0
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581128 4804 generic.go:334] "Generic (PLEG): container finished" podID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerID="6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" exitCode=0
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581346 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581406 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60"}
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581588 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.638414 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.659615 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.675278 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.675418 4804 scope.go:117] "RemoveContainer" containerID="351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9"
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.683341 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.683764 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd"
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.684793 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.684901 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:40.684837336 +0000 UTC m=+1416.479717320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.684947 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found
Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.685102 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:40.685090073 +0000 UTC m=+1416.479970087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698080 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698672 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698750 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.703389 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.709311 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:48414->10.217.0.207:8775: read: connection reset by peer"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.709454 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:48426->10.217.0.207:8775: read: connection reset by peer"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.712162 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.745161 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.757351 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:52218->10.217.0.162:9311: read: connection reset by peer"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.757488 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:52234->10.217.0.162:9311: read: connection reset by peer"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.771143 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.777158 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.939054 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" path="/var/lib/kubelet/pods/04cc886c-66ef-4b91-87cf-1f9fe5de8081/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.939779 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12271c96-a234-46d8-bc32-80db78339116" path="/var/lib/kubelet/pods/12271c96-a234-46d8-bc32-80db78339116/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.940439 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" path="/var/lib/kubelet/pods/24549b02-2977-49ee-8f25-a6ed25e523d1/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.955181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" path="/var/lib/kubelet/pods/50c4ac86-3241-4cd1-aa15-9a36b6be1e03/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.956174 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" path="/var/lib/kubelet/pods/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.957121 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" path="/var/lib/kubelet/pods/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.958231 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" path="/var/lib/kubelet/pods/b390f543-98da-46ea-b3b9-f68c09d94c03/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.959029 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c76352-2487-4098-bbee-579834052292" path="/var/lib/kubelet/pods/c6c76352-2487-4098-bbee-579834052292/volumes"
Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.959666 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" path="/var/lib/kubelet/pods/f7cab05f-efa6-4a74-920b-96f8f30f1736/volumes"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.093208 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.094747 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.106416 4804 scope.go:117] "RemoveContainer" containerID="71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.195304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196682 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196721 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196776 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197046 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197467 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.201519 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs" (OuterVolumeSpecName: "logs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.236136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll" (OuterVolumeSpecName: "kube-api-access-ffjll") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "kube-api-access-ffjll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.237602 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts" (OuterVolumeSpecName: "scripts") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.249480 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8" (OuterVolumeSpecName: "kube-api-access-t74n8") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "kube-api-access-t74n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.271093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.297114 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data" (OuterVolumeSpecName: "config-data") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301244 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301370 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301426 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301481 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301533 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301606 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.318495 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.354017 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.386212 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data" (OuterVolumeSpecName: "config-data") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.416526 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.416556 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.425229 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.468778 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475396 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475747 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" containerID="cri-o://e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475859 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" containerID="cri-o://4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475872 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" containerID="cri-o://1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475893 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" containerID="cri-o://e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.493903 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517222 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517294 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517359 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517384 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517509 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518916 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518973 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.519619 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.524063 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.526991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.527503 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.528322 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc" (OuterVolumeSpecName: "kube-api-access-z6vwc") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "kube-api-access-z6vwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.528972 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.579246 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.579471 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" containerID="cri-o://ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620243 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620409 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620441 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620487 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620528 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620547 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620635 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620661 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620681 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620700 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621226 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621244 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621254 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621265 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621276 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.626257 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs" (OuterVolumeSpecName: "logs") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.640395 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs" (OuterVolumeSpecName: "logs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.712633 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4" (OuterVolumeSpecName: "kube-api-access-67wq4") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "kube-api-access-67wq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.713311 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.774682 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"6d2eca1ee21c2e58f6c5ebc2fd659f0e3b36f17ff8d88938be99b51b5573272e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.774741 4804 scope.go:117] "RemoveContainer" containerID="b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.775071 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787322 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787360 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787381 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787391 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.798357 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerID="5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.798468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.814971 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"326e140f9daa666bf3c0b563922935205ab7fc5dba38cc45fd96d0a13dcbd798"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.815115 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.821512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerDied","Data":"c41ec5eb61e29312ebbde6dd9b201b0e68fdaaa8fb1724740ba107ac19157740"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.821663 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.842476 4804 generic.go:334] "Generic (PLEG): container finished" podID="5198da96-d6b6-4b80-bb93-838dff10730e" containerID="ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.842714 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.847144 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.854651 4804 generic.go:334] "Generic (PLEG): container finished" podID="878daeff-34bf-4dab-8118-e42c318849bb" containerID="1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.854734 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.861600 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99" (OuterVolumeSpecName: "kube-api-access-qkk99") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "kube-api-access-qkk99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.867047 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868150 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868232 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868261 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868347 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.873292 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.879862 4804 generic.go:334] "Generic (PLEG): container finished" podID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880083 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880155 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880470 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" containerID="cri-o://386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880934 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.889267 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"]
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.889990 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890014 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890030 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890036 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890071 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890085 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890091 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890105 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0"
containerName="placement-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890111 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890114 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890127 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890135 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890144 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="init" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890151 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="init" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890182 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890188 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890205 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890212 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890229 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890235 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890256 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890262 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890278 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890284 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890302 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890308 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890321 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890328 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890353 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890363 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890382 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890388 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890407 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890413 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890425 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890431 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890443 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="mysql-bootstrap" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890449 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="mysql-bootstrap" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890456 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890464 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890477 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890483 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890493 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c76352-2487-4098-bbee-579834052292" 
containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890512 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890522 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890528 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890833 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890840 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890855 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890862 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890874 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890903 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890914 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890928 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890945 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890956 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890963 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890976 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891020 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 
11:45:37.891037 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891047 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891058 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891073 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891083 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891096 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891110 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.898036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.901132 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.912940 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.930456 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.935216 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.945484 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.951566 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.957922 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.958193 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6f885d959c-vhjh4" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" containerID="cri-o://31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" gracePeriod=30 Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.964730 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.970060 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.991258 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.991399 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.993066 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.001373 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.005142 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.012309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.024814 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.032949 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data" (OuterVolumeSpecName: "config-data") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.077369 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.077653 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.077696 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.087451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data" (OuterVolumeSpecName: "config-data") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.087447 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.087493 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089676 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af777f5_5dfc_4f4d_b7c5_dd0de3f80def.slice/crio-ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f5a2ef_6224_4af8_8bba_32c689a960f1.slice/crio-conmon-4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f5cdaa9_8b1d_44b2_bfe6_d986f680327f.slice/crio-9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089806 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089855 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:45:38 crc 
kubenswrapper[4804]: E0128 11:45:38.090209 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.090240 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.099658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.105754 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.105843 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.605820022 +0000 UTC m=+1414.400700006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106358 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106563 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106583 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106596 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106609 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106621 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.106696 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.106750 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.606732171 +0000 UTC m=+1414.401612165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.121227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data" (OuterVolumeSpecName: "config-data") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.140991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.142047 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.162965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.171018 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.171143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217093 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217538 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217553 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217567 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217682 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217701 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.270592 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" containerID="cri-o://5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" gracePeriod=30 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.277294 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.307403 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-fv7qb operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-f8f4-account-create-update-xlhb4" podUID="e8eac10f-27a6-4229-9281-ead753bf852d" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.322320 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.329093 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.330752 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.331974 4804 scope.go:117] "RemoveContainer" containerID="6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.349756 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.360078 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.372452 4804 scope.go:117] "RemoveContainer" containerID="2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.391820 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.417289 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422872 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422951 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422997 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423090 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423136 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423157 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423275 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423308 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423326 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423345 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423388 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423431 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423477 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423501 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423533 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.426625 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.430237 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l" (OuterVolumeSpecName: "kube-api-access-4qm8l") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "kube-api-access-4qm8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.434503 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.436998 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.440102 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd" (OuterVolumeSpecName: "kube-api-access-4jlrd") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "kube-api-access-4jlrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455260 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj" (OuterVolumeSpecName: "kube-api-access-qshqj") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "kube-api-access-qshqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455870 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs" (OuterVolumeSpecName: "logs") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455963 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs" (OuterVolumeSpecName: "logs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.456594 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.457187 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts" (OuterVolumeSpecName: "scripts") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.458110 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs" (OuterVolumeSpecName: "logs") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.460280 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.474563 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.492436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.503243 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.508956 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.519379 4804 scope.go:117] "RemoveContainer" containerID="54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.524459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.524738 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525120 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525132 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525140 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525160 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525169 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525178 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525186 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525194 4804 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525201 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525210 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525217 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525225 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525233 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" (UID: "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.529058 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.545513 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.550324 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.558191 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf" (OuterVolumeSpecName: "kube-api-access-7pcnf") pod "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" (UID: "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8"). InnerVolumeSpecName "kube-api-access-7pcnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.588338 4804 scope.go:117] "RemoveContainer" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.605087 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data" (OuterVolumeSpecName: "config-data") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.612316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634390 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634768 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634784 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634796 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634806 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.634933 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.634990 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.634973078 +0000 UTC m=+1415.429853062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.640371 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.643120 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.643187 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.643166033 +0000 UTC m=+1415.438046017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.643747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data" (OuterVolumeSpecName: "config-data") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.649796 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.656109 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.666023 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data" (OuterVolumeSpecName: "config-data") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.688528 4804 scope.go:117] "RemoveContainer" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.717490 4804 scope.go:117] "RemoveContainer" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.735977 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736006 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736019 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736034 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736043 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.735975 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.736140 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:46.736124141 +0000 UTC m=+1422.531004125 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.753539 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.767681 4804 scope.go:117] "RemoveContainer" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.774556 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": container with ID starting with 8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e not found: ID does not exist" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.774605 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"} err="failed to get container status \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": rpc error: code = NotFound desc = could not find container \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": container with ID starting with 8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.774635 4804 scope.go:117] "RemoveContainer" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.775262 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": container with ID starting with 55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389 not found: ID does not exist" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.775318 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"} err="failed to get container status \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": rpc error: code = NotFound desc = could not find container \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": container with ID starting with 55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389 not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.775337 4804 scope.go:117] "RemoveContainer" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.778280 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.817464 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output=< Jan 28 11:45:38 crc kubenswrapper[4804]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 28 11:45:38 crc kubenswrapper[4804]: > Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.819305 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.821327 4804 scope.go:117] "RemoveContainer" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.821627 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.823375 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.823412 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.829451 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837193 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837306 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837342 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod 
\"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837402 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837448 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837528 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837555 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837576 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.848242 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.855251 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs" (OuterVolumeSpecName: "logs") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.855415 4804 scope.go:117] "RemoveContainer" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.859284 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": container with ID starting with f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca not found: ID does not exist" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.859320 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"} err="failed to get container status \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": rpc error: code = NotFound desc = could not find container \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": container with ID starting with f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.859346 4804 scope.go:117] "RemoveContainer" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.860282 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": container with ID starting with 5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06 not found: ID does not exist" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.860308 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} err="failed to get container status \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": rpc error: code = NotFound desc = could not find container \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": container with ID starting with 5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06 not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.886154 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.892814 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g" (OuterVolumeSpecName: "kube-api-access-qbx7g") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "kube-api-access-qbx7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.902631 4804 generic.go:334] "Generic (PLEG): container finished" podID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903107 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts" (OuterVolumeSpecName: "scripts") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903122 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903164 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"13f3f152dac9edae9ea4638a3a8d8a972d428663034fabf17665286ff2611f13"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903221 4804 scope.go:117] "RemoveContainer" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.906321 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.909201 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.910377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"59c191ec61924ab2b5f8fefd52ae2f9680b75391edc58ed63a1c1c209e71f63c"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.910639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data" (OuterVolumeSpecName: "config-data") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.938349 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940683 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940710 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940721 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940730 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940764 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940773 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940784 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.965799 4804 scope.go:117] "RemoveContainer" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.986254 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.990508 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.996788 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.997044 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" exitCode=2 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.997121 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.999734 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.002127 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" path="/var/lib/kubelet/pods/280cd1a0-6761-425c-8de1-bec2307ba0c0/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.010521 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" path="/var/lib/kubelet/pods/54fa6273-e08e-4dbb-a86b-a8951e4100fa/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.011147 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" path="/var/lib/kubelet/pods/79faecc7-1388-420a-9eee-b47d0ce87f34/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.011680 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" path="/var/lib/kubelet/pods/8686dbae-d7dd-4662-81a8-ab51cc85a115/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.012168 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" path="/var/lib/kubelet/pods/8cb48af9-edd2-404a-9d56-afedbfa79f07/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.019366 4804 generic.go:334] "Generic (PLEG): container finished" podID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerID="386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" exitCode=0 Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.019475 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" path="/var/lib/kubelet/pods/a4586997-59ed-4e13-b7ec-3146711f642c/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.020195 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" path="/var/lib/kubelet/pods/b0bfaf6b-2c74-4812-965a-4db80f0c4527/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.020798 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" path="/var/lib/kubelet/pods/bb3c1e4d-637e-4de6-aa37-7daff5298b30/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.029073 4804 generic.go:334] "Generic (PLEG): container finished" podID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerID="ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" exitCode=2 Jan 28 11:45:39 crc 
kubenswrapper[4804]: I0128 11:45:39.031131 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" path="/var/lib/kubelet/pods/fcc0e969-75e0-4441-a805-7845261f1ad5/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.045018 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046586 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046622 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046659 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046669 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046681 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerDied","Data":"386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerDied","Data":"ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.047460 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.048856 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.048896 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.153476 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.153827 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:47.153810882 +0000 UTC m=+1422.948690866 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.191244 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.198361 4804 scope.go:117] "RemoveContainer" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.198794 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": container with ID starting with 9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63 not found: ID does not exist" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.198957 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} err="failed to get container status \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": rpc error: code = NotFound desc = could not find container \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": container with ID starting with 9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63 not found: ID does not exist" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.199113 4804 scope.go:117] "RemoveContainer" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.199821 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": container with ID starting with 53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001 not found: ID does not exist" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 
11:45:39.199855 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} err="failed to get container status \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": rpc error: code = NotFound desc = could not find container \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": container with ID starting with 53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001 not found: ID does not exist" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.199893 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.210215 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.235002 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.253542 4804 scope.go:117] "RemoveContainer" containerID="ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.298075 4804 scope.go:117] "RemoveContainer" containerID="49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.313023 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.341260 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361071 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361199 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361227 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361279 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361295 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: 
\"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361353 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361379 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361397 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361840 4804 scope.go:117] "RemoveContainer" containerID="5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.363588 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.363965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data" (OuterVolumeSpecName: "config-data") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf" (OuterVolumeSpecName: "kube-api-access-flxcf") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "kube-api-access-flxcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369737 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369758 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369769 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.371861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn" (OuterVolumeSpecName: "kube-api-access-s6xhn") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-api-access-s6xhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.393120 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.419525 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.420541 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.420993 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.432438 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.442357 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.449180 4804 scope.go:117] "RemoveContainer" containerID="61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.449329 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.451430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.457437 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.461426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.466410 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.468161 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.470648 4804 scope.go:117] "RemoveContainer" containerID="1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471616 4804 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471637 4804 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471649 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471661 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471670 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.490491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.506940 4804 scope.go:117] "RemoveContainer" containerID="f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.511801 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.512852 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.519255 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.519503 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.573413 4804 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.674942 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.675075 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.675179 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.675229 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:41.675214346 +0000 UTC m=+1417.470094330 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.680096 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.680171 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:41.68015201 +0000 UTC m=+1417.475031994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079546 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerDied","Data":"86818d705a40c4508845f5e3530cd1a2ecd08917ac1287e69fd364a076602c00"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079842 4804 scope.go:117] "RemoveContainer" containerID="386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079581 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.083834 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.084957 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerDied","Data":"21e20525ca7a6c58cab2832c14cfe80c2d4514f39f84f4eb3108c5f05572b1bf"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.112928 4804 generic.go:334] "Generic (PLEG): container finished" podID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerID="a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" exitCode=0 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.112995 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.138460 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.142974 4804 scope.go:117] "RemoveContainer" containerID="ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145845 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145899 4804 generic.go:334] "Generic (PLEG): container finished" podID="edcdd787-6628-49ee-abcf-0146c096f547" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" exitCode=139 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145955 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.159773 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.165683 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.167973 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerID="95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" exitCode=0 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.168044 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.168039 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.171142 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.222765 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.227392 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.283685 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.283713 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.427261 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488345 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488408 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488441 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488512 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488582 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488728 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491342 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491872 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504177 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504728 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j" (OuterVolumeSpecName: "kube-api-access-gzs9j") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "kube-api-access-gzs9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504897 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.527088 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info" (OuterVolumeSpecName: "pod-info") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.527929 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data" (OuterVolumeSpecName: "config-data") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.565717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf" (OuterVolumeSpecName: "server-conf") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.589635 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590103 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590576 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590596 4804 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590607 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590617 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590625 4804 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590635 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590644 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590652 4804 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590663 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590671 4804 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: W0128 11:45:40.590998 4804 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/76d127f1-97d9-4552-9bdb-b3482a45951d/volumes/kubernetes.io~projected/rabbitmq-confd Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.591019 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd" 
(OuterVolumeSpecName: "rabbitmq-confd") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.606975 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.634650 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.653387 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.653472 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.700414 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701015 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701041 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701087 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701110 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701140 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701202 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701242 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701260 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701401 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701452 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701475 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701522 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701799 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701809 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.704688 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.707832 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.715150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.715532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config" (OuterVolumeSpecName: "config") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716217 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716279 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.719173 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.729642 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr" (OuterVolumeSpecName: "kube-api-access-wqhxr") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "kube-api-access-wqhxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.732160 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.737234 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts" (OuterVolumeSpecName: "scripts") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.739617 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data" (OuterVolumeSpecName: "config-data") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.763083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz" (OuterVolumeSpecName: "kube-api-access-6jjlz") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "kube-api-access-6jjlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.777101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.780249 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804086 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804128 4804 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804140 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804151 4804 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804160 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804191 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804200 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804208 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804216 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804226 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804234 4804 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804241 4804 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804250 4804 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804258 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804266 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.808501 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.820428 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.821589 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.832076 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905824 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905874 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905904 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905917 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.923508 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" path="/var/lib/kubelet/pods/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.924599 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" path="/var/lib/kubelet/pods/5198da96-d6b6-4b80-bb93-838dff10730e/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.925415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" path="/var/lib/kubelet/pods/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.926653 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878daeff-34bf-4dab-8118-e42c318849bb" path="/var/lib/kubelet/pods/878daeff-34bf-4dab-8118-e42c318849bb/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.927445 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" path="/var/lib/kubelet/pods/ae0fb199-797a-40c6-8c71-3b5a976b6c61/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.928113 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" path="/var/lib/kubelet/pods/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.929354 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" path="/var/lib/kubelet/pods/d47089ce-8b52-4bd3-a30e-04736fed01fc/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.929801 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8eac10f-27a6-4229-9281-ead753bf852d" path="/var/lib/kubelet/pods/e8eac10f-27a6-4229-9281-ead753bf852d/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: E0128 11:45:40.993624 4804 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 28 11:45:40 crc kubenswrapper[4804]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm 
clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-xtdr8" message=< Jan 28 11:45:40 crc kubenswrapper[4804]: Exiting ovn-controller (1) [FAILED] Jan 28 11:45:40 crc kubenswrapper[4804]: Killing ovn-controller (1) [ OK ] Jan 28 11:45:40 crc kubenswrapper[4804]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 28 11:45:40 crc kubenswrapper[4804]: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > Jan 28 11:45:40 crc kubenswrapper[4804]: E0128 11:45:40.993663 4804 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 28 11:45:40 crc kubenswrapper[4804]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" containerID="cri-o://4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.993696 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" containerID="cri-o://4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" gracePeriod=20 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.193927 4804 generic.go:334] "Generic (PLEG): container finished" podID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerID="bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" exitCode=0 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.194018 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200286 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"304507b474cdd7086e7df033bc16291530ac6b5f55a2e85e565b86562e7fde59"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200348 4804 scope.go:117] "RemoveContainer" containerID="a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200521 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.220946 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.221031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"1c34e1e54f29019381489766526d85a7ed81f51d7a176f0cfb6db1161fa7dad8"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.221151 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.228019 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"a5146612f4e2d80705681617c2e405b8c7dbe80637772da2d39bae9bb807359c"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.228108 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233218 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233278 4804 generic.go:334] "Generic (PLEG): container finished" podID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerID="4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" exitCode=137 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerDied","Data":"4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.239953 4804 generic.go:334] "Generic (PLEG): container finished" podID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" exitCode=0 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.240031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerDied","Data":"87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.536093 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.536365 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.551545 4804 scope.go:117] "RemoveContainer" containerID="938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.560090 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.576839 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.588985 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.594700 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.600428 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.600854 4804 scope.go:117] "RemoveContainer" containerID="17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626413 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626526 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626549 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626572 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626632 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626690 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626727 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frdmr\" (UniqueName: 
\"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626814 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626842 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627482 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run" (OuterVolumeSpecName: "var-run") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627531 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.628639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts" (OuterVolumeSpecName: "scripts") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.632707 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.632747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw" (OuterVolumeSpecName: "kube-api-access-98jpw") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "kube-api-access-98jpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.641430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr" (OuterVolumeSpecName: "kube-api-access-frdmr") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "kube-api-access-frdmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.648188 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.666377 4804 scope.go:117] "RemoveContainer" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.684543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data" (OuterVolumeSpecName: "config-data") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.684594 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.695181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.713608 4804 scope.go:117] "RemoveContainer" containerID="95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730323 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730352 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730366 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730374 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730383 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730390 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730398 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730405 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730413 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.759401 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.763233 4804 scope.go:117] "RemoveContainer" containerID="b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.780602 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831137 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831182 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831203 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831233 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831250 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831348 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831632 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.832448 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.833003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.833574 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.836831 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.837770 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts" (OuterVolumeSpecName: "scripts") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.838516 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52" (OuterVolumeSpecName: "kube-api-access-ldd52") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "kube-api-access-ldd52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.872811 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.884491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.924068 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.930266 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data" (OuterVolumeSpecName: "config-data") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932758 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932865 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932910 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933056 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: 
\"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933221 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933540 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933564 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933573 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933581 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933606 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933616 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933625 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933633 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.936181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q" (OuterVolumeSpecName: "kube-api-access-dff5q") pod 
"82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "kube-api-access-dff5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.936208 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs" (OuterVolumeSpecName: "logs") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937263 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts" (OuterVolumeSpecName: "scripts") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.938049 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp" (OuterVolumeSpecName: "kube-api-access-qx5wp") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "kube-api-access-qx5wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.939270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.957959 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.960561 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data" (OuterVolumeSpecName: "config-data") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.961528 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.989194 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.989245 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.991821 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data" (OuterVolumeSpecName: "config-data") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035122 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035157 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035168 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035176 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035184 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035192 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035202 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035210 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035220 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035229 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035237 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035245 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035253 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc 
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286412 4804 generic.go:334] "Generic (PLEG): container finished" podID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" exitCode=0
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286567 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286618 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerDied","Data":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286660 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerDied","Data":"7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286679 4804 scope.go:117] "RemoveContainer" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.288249 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerDied","Data":"7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.288304 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.297174 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"0556907b161f5a19bd7e76c946764eabb51dab90af80f30118fa8d78582a879a"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.297243 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.302789 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.302918 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerDied","Data":"33b738bafa7ea125cb6f8e21be749a37e8dc0b050b5dffa31b3e9875c08ddd2d"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.303095 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310445 4804 generic.go:334] "Generic (PLEG): container finished" podID="469a0049-480f-4cde-848d-4b11cb54130b" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" exitCode=0
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310493 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerDied","Data":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310572 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerDied","Data":"0f20d09f4e22850dccdafc066e7822cd90278816628e2fe4c307f19e6234a0ef"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314642 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" exitCode=0
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a"}
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314758 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.334778 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.338619 4804 scope.go:117] "RemoveContainer" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.340224 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": container with ID starting with 31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3 not found: ID does not exist" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.340265 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"} err="failed to get container status \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": rpc error: code = NotFound desc = could not find container \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": container with ID starting with 31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.340312 4804 scope.go:117] "RemoveContainer" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.341428 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") "
\"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.341629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.341804 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.344269 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.359733 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.369582 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.376604 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.376612 4804 scope.go:117] "RemoveContainer" containerID="bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.389862 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4" (OuterVolumeSpecName: "kube-api-access-xqdj4") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "kube-api-access-xqdj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.432259 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.433764 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data" (OuterVolumeSpecName: "config-data") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.433788 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.440489 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.440567 4804 scope.go:117] "RemoveContainer" containerID="1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.448911 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451803 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451825 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451838 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.457055 4804 scope.go:117] "RemoveContainer" containerID="4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.457176 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.463841 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.484515 4804 scope.go:117] "RemoveContainer" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500319 4804 scope.go:117] "RemoveContainer" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.500672 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": container with ID starting with df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a not found: ID does not exist" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500704 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"} err="failed to get container status \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": rpc error: code = NotFound desc = could not find container \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": container with ID starting with df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a not found: ID does not exist" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500725 4804 scope.go:117] "RemoveContainer" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.520023 4804 scope.go:117] "RemoveContainer" 
containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.538126 4804 scope.go:117] "RemoveContainer" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.555482 4804 scope.go:117] "RemoveContainer" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.570730 4804 scope.go:117] "RemoveContainer" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.571185 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": container with ID starting with 1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c not found: ID does not exist" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571218 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"} err="failed to get container status \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": rpc error: code = NotFound desc = could not find container \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": container with ID starting with 1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c not found: ID does not exist" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571244 4804 scope.go:117] "RemoveContainer" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.571557 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": container with ID starting with 4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4 not found: ID does not exist" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571657 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"} err="failed to get container status \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": rpc error: code = NotFound desc = could not find container \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": container with ID starting with 4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4 not found: ID does not exist" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571807 4804 scope.go:117] "RemoveContainer" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.572308 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": container with ID starting with e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342 not found: ID does not exist" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" 
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.572361 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"} err="failed to get container status \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": rpc error: code = NotFound desc = could not find container \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": container with ID starting with e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.572395 4804 scope.go:117] "RemoveContainer" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.574107 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": container with ID starting with e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4 not found: ID does not exist" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.574208 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"} err="failed to get container status \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": rpc error: code = NotFound desc = could not find container \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": container with ID starting with e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.649334 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.657858 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.924181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469a0049-480f-4cde-848d-4b11cb54130b" path="/var/lib/kubelet/pods/469a0049-480f-4cde-848d-4b11cb54130b/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.924705 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" path="/var/lib/kubelet/pods/4efe85dc-b64c-4cbe-83f7-89fa462a95a0/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.926080 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" path="/var/lib/kubelet/pods/76d127f1-97d9-4552-9bdb-b3482a45951d/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.927179 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" path="/var/lib/kubelet/pods/82ef8b43-de59-45f8-9c2a-765c5709054b/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.927787 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" path="/var/lib/kubelet/pods/8e88e9db-b96d-4009-a4e6-ccbb5be53f85/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.928334 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" path="/var/lib/kubelet/pods/90f5a2ef-6224-4af8-8bba-32c689a960f1/volumes"
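The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" entries above are the kubelet retrying deletion of containers that CRI-O has already removed; during pod teardown this churn is expected rather than a failure. A small sketch (hypothetical helper; same plain-text-line assumption as above) that collects the affected container IDs so they can be set aside when scanning for real errors:

```python
import re

# Matches the NotFound pattern seen in this log; the trailing containerID
# field carries the 64-hex CRI-O container ID.
NOT_FOUND = re.compile(
    r'"ContainerStatus from runtime service failed".*'
    r'code = NotFound.*containerID="([0-9a-f]{64})"'
)

def already_gone(lines):
    """Container IDs whose status lookup returned NotFound during cleanup."""
    return {m.group(1) for line in lines if (m := NOT_FOUND.search(line))}
```

Applied to this window, the set would contain 31328681..., df4dfb42..., 1b6de4c9..., 4edb0114..., e821803c..., and e61b3b09..., all belonging to pods that were deleted above.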
podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" path="/var/lib/kubelet/pods/90f5a2ef-6224-4af8-8bba-32c689a960f1/volumes" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.929461 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" path="/var/lib/kubelet/pods/ec6a5a02-2cbe-421b-bcf5-54572e000f28/volumes" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.930136 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcdd787-6628-49ee-abcf-0146c096f547" path="/var/lib/kubelet/pods/edcdd787-6628-49ee-abcf-0146c096f547/volumes" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.931452 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" path="/var/lib/kubelet/pods/f7c5c969-c4c2-4f76-b3c6-152473159e78/volumes" Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.065653 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.066579 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.066749 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.067012 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.067064 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.070181 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.076004 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd"
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.156701 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.158274 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.159296 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.159342 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera"
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.065187 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.065685 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066119 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066193 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server"
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066626 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.068427 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.069668 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.069708 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd"
Jan 28 11:45:51 crc kubenswrapper[4804]: I0128 11:45:51.433551 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9696/\": dial tcp 10.217.0.167:9696: connect: connection refused"
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.065547 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.066483 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.066860 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.067215 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.067908 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.075221 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.077358 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.077464 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.405766 4804 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.441970 4804 generic.go:334] "Generic (PLEG): container finished" podID="095bc753-88c4-456c-a3ae-aa0040a76338" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe" exitCode=0
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442021 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"}
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e"}
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442080 4804 scope.go:117] "RemoveContainer" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442143 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.468223 4804 scope.go:117] "RemoveContainer" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.494753 4804 scope.go:117] "RemoveContainer" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: E0128 11:45:55.495232 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": container with ID starting with 789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f not found: ID does not exist" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495269 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} err="failed to get container status \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": rpc error: code = NotFound desc = could not find container \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": container with ID starting with 789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f not found: ID does not exist"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495294 4804 scope.go:117] "RemoveContainer" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: E0128 11:45:55.495531 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": container with ID starting with 5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe not found: ID does not exist" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495559 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"} err="failed to get container status \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": rpc error: code = NotFound desc = could not find container \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": container with ID starting with 5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe not found: ID does not exist"
containerID={"Type":"cri-o","ID":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"} err="failed to get container status \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": rpc error: code = NotFound desc = could not find container \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": container with ID starting with 5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe not found: ID does not exist" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542533 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542645 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542803 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542855 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542899 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542975 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.550372 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k" (OuterVolumeSpecName: "kube-api-access-q9c5k") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "kube-api-access-q9c5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.560215 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.583118 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config" (OuterVolumeSpecName: "config") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.587220 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.591090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.600466 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.616818 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644189 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644228 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644239 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644248 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644257 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644266 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644277 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.781770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.789023 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:45:56 crc kubenswrapper[4804]: I0128 11:45:56.927588 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" path="/var/lib/kubelet/pods/095bc753-88c4-456c-a3ae-aa0040a76338/volumes" Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.157066 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.158375 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.159739 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.159824 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.065403 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066153 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066544 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066639 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.068182 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.070441 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.072057 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.072091 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.115399 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282442 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282491 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282539 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282577 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282637 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282703 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282739 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 
crc kubenswrapper[4804]: I0128 11:45:58.283561 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.283613 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.284056 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.284373 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.288568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m" (OuterVolumeSpecName: "kube-api-access-fb78m") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "kube-api-access-fb78m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.292309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.307221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.334951 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383853 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383970 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383982 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383990 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384001 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384009 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384016 4804 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384025 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.398249 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.471922 4804 generic.go:334] "Generic (PLEG): container finished" podID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" exitCode=0 Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.471964 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.472017 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"a6f77cd6c96b39492fe76acbd919310cca2dbd61ed6cf94d721e54f9cb0227d1"} Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.472057 4804 scope.go:117] "RemoveContainer" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 
11:45:58.472196 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.485977 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.506957 4804 scope.go:117] "RemoveContainer" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.509466 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.514657 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.526445 4804 scope.go:117] "RemoveContainer" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.526947 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": container with ID starting with 5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361 not found: ID does not exist" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527035 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} err="failed to get container status \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": rpc error: code = NotFound desc = could not find container \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": container with ID starting with 5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361 not found: ID does not exist" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527108 4804 scope.go:117] "RemoveContainer" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.527599 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": container with ID starting with 111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340 not found: ID does not exist" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527639 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} err="failed to get container status \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": rpc error: code = NotFound desc = could not find container \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": container with ID starting with 111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340 not found: ID does not exist" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.924013 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" 
path="/var/lib/kubelet/pods/33d56e9c-416a-4816-81a7-8def89c20c8e/volumes" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.067625 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfzkj_9d301959-ed06-4b22-8e97-f3fc9a9bc491/ovs-vswitchd/0.log" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.068748 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139017 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139113 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139139 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139161 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run" (OuterVolumeSpecName: "var-run") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib" (OuterVolumeSpecName: "var-lib") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139307 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139392 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log" (OuterVolumeSpecName: "var-log") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139680 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139691 4804 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139700 4804 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139709 4804 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.140207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts" (OuterVolumeSpecName: "scripts") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.144107 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx" (OuterVolumeSpecName: "kube-api-access-djwbx") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "kube-api-access-djwbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.240614 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.240653 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.522536 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfzkj_9d301959-ed06-4b22-8e97-f3fc9a9bc491/ovs-vswitchd/0.log" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523401 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" exitCode=137 Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523438 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523464 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"2ef238b63ba108007593ebb8599aaea3fae02c4b5040dd8085355ce0141a6ab3"} Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523481 4804 scope.go:117] "RemoveContainer" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523511 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.608442 4804 scope.go:117] "RemoveContainer" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.608617 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.614657 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.630987 4804 scope.go:117] "RemoveContainer" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.656563 4804 scope.go:117] "RemoveContainer" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.657587 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": container with ID starting with 27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067 not found: ID does not exist" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657627 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} err="failed to get container status \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": rpc error: code = NotFound desc = could not find container \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": container with ID starting with 27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067 not found: ID does not exist" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657653 4804 scope.go:117] "RemoveContainer" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.657908 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": container with ID starting with b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 not found: ID does not exist" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657932 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} err="failed to get container status \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": rpc error: code = NotFound desc = could not find container \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": container with ID starting with b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 not found: ID does not exist" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657950 4804 scope.go:117] "RemoveContainer" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.658173 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": container with ID starting with 67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d not found: ID does not exist" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.658198 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} err="failed to get container status \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": rpc error: code = NotFound desc = could not find container \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": container with ID starting with 67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d not found: ID does not exist" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.922605 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" path="/var/lib/kubelet/pods/9d301959-ed06-4b22-8e97-f3fc9a9bc491/volumes" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.945277 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051558 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051727 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051822 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051844 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051869 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051939 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.052344 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache" (OuterVolumeSpecName: "cache") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.052407 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock" (OuterVolumeSpecName: "lock") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.058262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.058426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.059801 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t" (OuterVolumeSpecName: "kube-api-access-t2q8t") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "kube-api-access-t2q8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153371 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153516 4804 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153528 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153538 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153550 4804 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.167762 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.254336 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.298282 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.355443 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544156 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" exitCode=137 Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"4a5bec567872839575faf98626366f5cc236d0134aa37c746f2c87478bb70e91"} Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544250 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544263 4804 scope.go:117] "RemoveContainer" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.572042 4804 scope.go:117] "RemoveContainer" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.578632 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.586496 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.610976 4804 scope.go:117] "RemoveContainer" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.629522 4804 scope.go:117] "RemoveContainer" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.645856 4804 scope.go:117] "RemoveContainer" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.660172 4804 scope.go:117] "RemoveContainer" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.676524 4804 scope.go:117] "RemoveContainer" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.691918 4804 scope.go:117] "RemoveContainer" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.710828 4804 scope.go:117] "RemoveContainer" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.731230 4804 scope.go:117] "RemoveContainer" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.750047 4804 scope.go:117] "RemoveContainer" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.772542 4804 scope.go:117] "RemoveContainer" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.791500 4804 scope.go:117] "RemoveContainer" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.813844 4804 scope.go:117] "RemoveContainer" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.830729 4804 scope.go:117] "RemoveContainer" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.848695 4804 scope.go:117] "RemoveContainer" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849170 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": container with ID starting with 
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849209 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} err="failed to get container status \"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": rpc error: code = NotFound desc = could not find container \"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": container with ID starting with 3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849235 4804 scope.go:117] "RemoveContainer" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"
Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849526 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": container with ID starting with a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b not found: ID does not exist" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849557 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} err="failed to get container status \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": rpc error: code = NotFound desc = could not find container \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": container with ID starting with a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849589 4804 scope.go:117] "RemoveContainer" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"
Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849937 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": container with ID starting with 43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e not found: ID does not exist" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849967 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} err="failed to get container status \"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": rpc error: code = NotFound desc = could not find container \"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": container with ID starting with 43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849985 4804 scope.go:117] "RemoveContainer" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"
Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850216 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": container with ID starting with 02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5 not found: ID does not exist" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850242 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} err="failed to get container status \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": rpc error: code = NotFound desc = could not find container \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": container with ID starting with 02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5 not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850261 4804 scope.go:117] "RemoveContainer" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"
Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850554 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": container with ID starting with 88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3 not found: ID does not exist" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850584 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} err="failed to get container status \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": rpc error: code = NotFound desc = could not find container \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": container with ID starting with 88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3 not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850602 4804 scope.go:117] "RemoveContainer" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"
Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850854 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": container with ID starting with f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d not found: ID does not exist" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850901 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} err="failed to get container status \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": rpc error: code = NotFound desc = could not find container \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": container with ID starting with f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d not found: ID does not exist"
Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850921 4804 scope.go:117] "RemoveContainer" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"
containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.851303 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": container with ID starting with 5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16 not found: ID does not exist" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851331 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} err="failed to get container status \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": rpc error: code = NotFound desc = could not find container \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": container with ID starting with 5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851346 4804 scope.go:117] "RemoveContainer" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.851627 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": container with ID starting with e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657 not found: ID does not exist" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851681 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"} err="failed to get container status \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": rpc error: code = NotFound desc = could not find container \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": container with ID starting with e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851719 4804 scope.go:117] "RemoveContainer" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852062 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": container with ID starting with ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161 not found: ID does not exist" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852093 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"} err="failed to get container status \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": rpc error: code = NotFound desc = could not find container \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": container with ID starting with 
ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852113 4804 scope.go:117] "RemoveContainer" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852361 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": container with ID starting with fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6 not found: ID does not exist" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852388 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"} err="failed to get container status \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": rpc error: code = NotFound desc = could not find container \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": container with ID starting with fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852402 4804 scope.go:117] "RemoveContainer" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852849 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": container with ID starting with a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20 not found: ID does not exist" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852876 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"} err="failed to get container status \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": rpc error: code = NotFound desc = could not find container \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": container with ID starting with a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852908 4804 scope.go:117] "RemoveContainer" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853128 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": container with ID starting with a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5 not found: ID does not exist" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853159 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"} err="failed to get container status \"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": rpc 
error: code = NotFound desc = could not find container \"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": container with ID starting with a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853183 4804 scope.go:117] "RemoveContainer" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853397 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": container with ID starting with 2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a not found: ID does not exist" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853425 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"} err="failed to get container status \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": rpc error: code = NotFound desc = could not find container \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": container with ID starting with 2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853445 4804 scope.go:117] "RemoveContainer" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853699 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": container with ID starting with 1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb not found: ID does not exist" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853727 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"} err="failed to get container status \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": rpc error: code = NotFound desc = could not find container \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": container with ID starting with 1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853745 4804 scope.go:117] "RemoveContainer" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.854083 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": container with ID starting with c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf not found: ID does not exist" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.854114 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"} err="failed to get container status \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": rpc error: code = NotFound desc = could not find container \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": container with ID starting with c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf not found: ID does not exist" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.533612 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540014 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540037 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540059 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540070 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540078 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540092 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540110 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540118 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540130 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540138 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540152 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540159 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540172 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcdd787-6628-49ee-abcf-0146c096f547" 
containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540179 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540189 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540197 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540208 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540215 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540226 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540234 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540245 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540253 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540262 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540268 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540281 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540288 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540301 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540309 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540323 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540330 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540343 4804 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540350 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540358 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server-init" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540365 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server-init" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540374 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540382 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540395 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540403 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540414 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540422 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540436 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540444 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540453 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540461 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540472 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540479 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540490 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540497 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540506 4804 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540514 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540526 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540535 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540544 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540551 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540559 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540567 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540578 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540585 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540597 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540605 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540618 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540624 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540636 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540647 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540658 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540666 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 
28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540675 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540683 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540693 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540701 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540713 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540721 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540733 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540740 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540753 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540761 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540772 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540780 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540792 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540799 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540828 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540836 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540843 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540848 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" 
containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540856 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540862 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540874 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540913 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540921 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540931 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540938 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540947 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540956 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540964 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540971 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540982 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="mysql-bootstrap" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540988 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="mysql-bootstrap" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541492 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541514 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541549 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541566 4804 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541577 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541585 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541596 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541605 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541641 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541653 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541967 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541986 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541994 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542006 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542018 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542127 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542142 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542154 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542165 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542204 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542492 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc 
kubenswrapper[4804]: I0128 11:46:04.542512 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542661 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542676 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542691 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542792 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542845 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542858 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542870 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543027 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543037 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543045 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543053 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543062 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543185 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543200 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543207 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543296 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543325 4804 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543336 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543344 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543542 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543552 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543561 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543571 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543585 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.545626 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.555217 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.584872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.584947 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.585086 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686771 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc 
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686906 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.687515 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.687525 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.718373 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.899715 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.925961 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" path="/var/lib/kubelet/pods/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc/volumes"
Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.325487 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"]
Jan 28 11:46:05 crc kubenswrapper[4804]: W0128 11:46:05.331590 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97cb07bd_2024_4cd4_aed6_86ccdfcf50b5.slice/crio-09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441 WatchSource:0}: Error finding container 09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441: Status 404 returned error can't find the container with id 09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441
Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.609386 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"}
Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.609845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441"}
Jan 28 11:46:06 crc kubenswrapper[4804]: I0128 11:46:06.619631 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" exitCode=0
Jan 28 11:46:06 crc kubenswrapper[4804]: I0128 11:46:06.619685 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"}
Jan 28 11:46:08 crc kubenswrapper[4804]: I0128 11:46:08.640581 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"}
Jan 28 11:46:09 crc kubenswrapper[4804]: I0128 11:46:09.648900 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" exitCode=0
Jan 28 11:46:09 crc kubenswrapper[4804]: I0128 11:46:09.649165 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"}
Jan 28 11:46:10 crc kubenswrapper[4804]: I0128 11:46:10.667696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"}
Jan 28 11:46:10 crc kubenswrapper[4804]: I0128 11:46:10.690666 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-542mk" podStartSLOduration=2.998582142 podStartE2EDuration="6.690641375s" podCreationTimestamp="2026-01-28 11:46:04 +0000 UTC" firstStartedPulling="2026-01-28 11:46:06.621400054 +0000 UTC m=+1442.416280038" lastFinishedPulling="2026-01-28 11:46:10.313459287 +0000 UTC m=+1446.108339271" observedRunningTime="2026-01-28 11:46:10.688832259 +0000 UTC m=+1446.483712243" watchObservedRunningTime="2026-01-28 11:46:10.690641375 +0000 UTC m=+1446.485521359"
Jan 28 11:46:14 crc kubenswrapper[4804]: I0128 11:46:14.900825 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:14 crc kubenswrapper[4804]: I0128 11:46:14.901211 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:15 crc kubenswrapper[4804]: I0128 11:46:15.950309 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-542mk" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" probeResult="failure" output=<
Jan 28 11:46:15 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 11:46:15 crc kubenswrapper[4804]: >
Jan 28 11:46:24 crc kubenswrapper[4804]: I0128 11:46:24.948057 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:24 crc kubenswrapper[4804]: I0128 11:46:24.995760 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-542mk"
Jan 28 11:46:25 crc kubenswrapper[4804]: I0128 11:46:25.187543 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"]
Jan 28 11:46:26 crc kubenswrapper[4804]: I0128 11:46:26.792154 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-542mk" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" containerID="cri-o://1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" gracePeriod=2
Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.164628 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk"
Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317736 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317799 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317818 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.319279 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities" (OuterVolumeSpecName: "utilities") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.324083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl" (OuterVolumeSpecName: "kube-api-access-xwlwl") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "kube-api-access-xwlwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.419855 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.419912 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.468287 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.521049 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801653 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" exitCode=0 Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"} Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801720 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441"} Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801739 4804 scope.go:117] "RemoveContainer" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801846 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.833133 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.835856 4804 scope.go:117] "RemoveContainer" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.839668 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.876844 4804 scope.go:117] "RemoveContainer" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.893996 4804 scope.go:117] "RemoveContainer" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 11:46:27.894449 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": container with ID starting with 1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17 not found: ID does not exist" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894501 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"} err="failed to get container status \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": rpc error: code = NotFound desc = could not find container \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": container with ID starting with 1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17 not found: ID does not exist" Jan 28 11:46:27 crc 
kubenswrapper[4804]: I0128 11:46:27.894529 4804 scope.go:117] "RemoveContainer" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 11:46:27.894870 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": container with ID starting with a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070 not found: ID does not exist" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894926 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"} err="failed to get container status \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": rpc error: code = NotFound desc = could not find container \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": container with ID starting with a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070 not found: ID does not exist" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894951 4804 scope.go:117] "RemoveContainer" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 11:46:27.895210 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": container with ID starting with 4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15 not found: ID does not exist" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.895245 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"} err="failed to get container status \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": rpc error: code = NotFound desc = could not find container \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": container with ID starting with 4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15 not found: ID does not exist" Jan 28 11:46:28 crc kubenswrapper[4804]: I0128 11:46:28.924802 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" path="/var/lib/kubelet/pods/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5/volumes" Jan 28 11:47:12 crc kubenswrapper[4804]: I0128 11:47:12.582557 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:47:12 crc kubenswrapper[4804]: I0128 11:47:12.583172 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.649977 4804 scope.go:117] "RemoveContainer" 
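The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above look alarming but are benign: the kubelet re-requests deletion of containers it has already removed, and CRI-O answers NotFound over gRPC. The usual way to make such a delete idempotent is to treat codes.NotFound as success; a sketch under that assumption, where the remove callback is a stand-in for the real CRI client:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats "already gone" as success, which is why the
// NotFound errors above are logged once and then dropped.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("failed to remove container %q: %w", id, err)
	}
	return nil
}

func main() {
	// Simulate the runtime answering NotFound, as CRI-O does in the log.
	notFound := status.Error(codes.NotFound, "could not find container")
	err := removeContainer(func(string) error { return notFound }, "1660de3184c6")
	fmt.Println("result:", err) // result: <nil> -- idempotent success
}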
containerID="7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.679637 4804 scope.go:117] "RemoveContainer" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.698114 4804 scope.go:117] "RemoveContainer" containerID="61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.716993 4804 scope.go:117] "RemoveContainer" containerID="6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.741337 4804 scope.go:117] "RemoveContainer" containerID="f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.764812 4804 scope.go:117] "RemoveContainer" containerID="9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.796691 4804 scope.go:117] "RemoveContainer" containerID="350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.814509 4804 scope.go:117] "RemoveContainer" containerID="083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.833392 4804 scope.go:117] "RemoveContainer" containerID="eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.848855 4804 scope.go:117] "RemoveContainer" containerID="f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.863469 4804 scope.go:117] "RemoveContainer" containerID="91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.885951 4804 scope.go:117] "RemoveContainer" containerID="5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.901962 4804 scope.go:117] "RemoveContainer" containerID="0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.918241 4804 scope.go:117] "RemoveContainer" containerID="acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.935227 4804 scope.go:117] "RemoveContainer" containerID="07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.953102 4804 scope.go:117] "RemoveContainer" containerID="33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.970847 4804 scope.go:117] "RemoveContainer" containerID="445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.990932 4804 scope.go:117] "RemoveContainer" containerID="1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.006552 4804 scope.go:117] "RemoveContainer" containerID="647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.024269 4804 scope.go:117] "RemoveContainer" containerID="90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.046510 4804 
scope.go:117] "RemoveContainer" containerID="afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.068998 4804 scope.go:117] "RemoveContainer" containerID="630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.100688 4804 scope.go:117] "RemoveContainer" containerID="17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8" Jan 28 11:47:42 crc kubenswrapper[4804]: I0128 11:47:42.581860 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:47:42 crc kubenswrapper[4804]: I0128 11:47:42.582335 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.581987 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.582604 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.582645 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.583247 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.583300 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" gracePeriod=600 Jan 28 11:48:13 crc kubenswrapper[4804]: E0128 11:48:13.330271 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.632852 
Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.632852 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" exitCode=0
Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.632968 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"}
Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.633084 4804 scope.go:117] "RemoveContainer" containerID="4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"
Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.633666 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"
Jan 28 11:48:13 crc kubenswrapper[4804]: E0128 11:48:13.634046 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.354919 4804 scope.go:117] "RemoveContainer" containerID="c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.395163 4804 scope.go:117] "RemoveContainer" containerID="141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.416528 4804 scope.go:117] "RemoveContainer" containerID="905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.463291 4804 scope.go:117] "RemoveContainer" containerID="39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.502144 4804 scope.go:117] "RemoveContainer" containerID="e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d"
Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.537502 4804 scope.go:117] "RemoveContainer" containerID="dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821"
Jan 28 11:48:27 crc kubenswrapper[4804]: I0128 11:48:27.915300 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"
Jan 28 11:48:27 crc kubenswrapper[4804]: E0128 11:48:27.916081 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.471380 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"]
Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.474377 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-content"
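From this point the daemon is in CrashLoopBackOff: each later "RemoveContainer" / "Error syncing pod, skipping" pair is the kubelet declining to restart the container until the back-off window expires. The restart back-off starts at 10 seconds and doubles per restart, capped at five minutes, which is the "back-off 5m0s" quoted in the error; a quick sketch of that schedule (the reset behavior after a stable run is omitted):

package main

import (
	"fmt"
	"time"
)

func main() {
	// CrashLoopBackOff schedule: 10s, 20s, 40s, ... capped at 5m0s --
	// the cap is the "back-off 5m0s" quoted in the log.
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("restart %d: back-off %v\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}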
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.474844 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-content"
Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.475352 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.475582 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server"
Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.475740 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-utilities"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.475856 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-utilities"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.476402 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.482164 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.487997 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"]
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586563 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689094 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689539 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689655 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.690093 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.715925 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.818494 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.330597 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"]
Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780755 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690" exitCode=0
Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780817 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"}
Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780871 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerStarted","Data":"23614a20679ce59f15145c1e802dbdc5ecc324238e99d3d474c222adfacf2c91"}
Jan 28 11:48:36 crc kubenswrapper[4804]: I0128 11:48:36.794720 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601" exitCode=0
Jan 28 11:48:36 crc kubenswrapper[4804]: I0128 11:48:36.795012 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"}
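The mount activity above is the standard shape of an OLM catalog pod: two emptyDir volumes, "utilities" and "catalog-content", shared between the extract steps and the registry-server container, plus a projected service-account token volume (kube-api-access-dnd2f). A sketch of that volume layout with the Kubernetes Go types; the pod and volume names are copied from the log, while the image and mount paths are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "certified-operators-d9v6p",
			Namespace: "openshift-marketplace",
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				// The two emptyDirs being mounted/unmounted in the log.
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				// kube-api-access-* is a projected service-account token
				// volume; its projection sources are elided here.
			},
			Containers: []corev1.Container{{
				Name:  "registry-server",
				Image: "registry.example/placeholder:latest", // placeholder
				VolumeMounts: []corev1.VolumeMount{
					{Name: "utilities", MountPath: "/utilities"},           // illustrative path
					{Name: "catalog-content", MountPath: "/extracted-catalog"}, // illustrative path
				},
			}},
		},
	}
	fmt.Println(pod.Name, "volumes:", len(pod.Spec.Volumes))
}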
pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerStarted","Data":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"} Jan 28 11:48:37 crc kubenswrapper[4804]: I0128 11:48:37.820547 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9v6p" podStartSLOduration=2.284818576 podStartE2EDuration="4.820521592s" podCreationTimestamp="2026-01-28 11:48:33 +0000 UTC" firstStartedPulling="2026-01-28 11:48:34.782068162 +0000 UTC m=+1590.576948146" lastFinishedPulling="2026-01-28 11:48:37.317771178 +0000 UTC m=+1593.112651162" observedRunningTime="2026-01-28 11:48:37.818387474 +0000 UTC m=+1593.613267478" watchObservedRunningTime="2026-01-28 11:48:37.820521592 +0000 UTC m=+1593.615401576" Jan 28 11:48:40 crc kubenswrapper[4804]: I0128 11:48:40.915096 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:48:40 crc kubenswrapper[4804]: E0128 11:48:40.915868 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.818697 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.818761 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.866217 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.911143 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:44 crc kubenswrapper[4804]: I0128 11:48:44.102500 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:45 crc kubenswrapper[4804]: I0128 11:48:45.861453 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9v6p" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" containerID="cri-o://c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" gracePeriod=2 Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.299444 4804 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.299444 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384247 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") "
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384338 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") "
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384455 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") "
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.387910 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities" (OuterVolumeSpecName: "utilities") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.399172 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f" (OuterVolumeSpecName: "kube-api-access-dnd2f") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "kube-api-access-dnd2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.440752 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486419 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486464 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") on node \"crc\" DevicePath \"\""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486480 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870015 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" exitCode=0
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"}
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870145 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870383 4804 scope.go:117] "RemoveContainer" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870367 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"23614a20679ce59f15145c1e802dbdc5ecc324238e99d3d474c222adfacf2c91"}
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.900925 4804 scope.go:117] "RemoveContainer" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.912820 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"]
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.930589 4804 scope.go:117] "RemoveContainer" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.948852 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"]
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.948981 4804 scope.go:117] "RemoveContainer" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"
Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 11:48:46.949418 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": container with ID starting with c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d not found: ID does not exist" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949464 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"} err="failed to get container status \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": rpc error: code = NotFound desc = could not find container \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": container with ID starting with c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d not found: ID does not exist"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949496 4804 scope.go:117] "RemoveContainer" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"
Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 11:48:46.949783 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": container with ID starting with a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601 not found: ID does not exist" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949805 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"} err="failed to get container status \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": rpc error: code = NotFound desc = could not find container \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": container with ID starting with a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601 not found: ID does not exist"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949823 4804 scope.go:117] "RemoveContainer" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"
Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 11:48:46.950112 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": container with ID starting with b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690 not found: ID does not exist" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"
Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.950281 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"} err="failed to get container status \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": rpc error: code = NotFound desc = could not find container \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": container with ID starting with b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690 not found: ID does not exist"
Jan 28 11:48:48 crc kubenswrapper[4804]: I0128 11:48:48.925716 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36dace41-3e60-485b-8a38-7678187e37bc" path="/var/lib/kubelet/pods/36dace41-3e60-485b-8a38-7678187e37bc/volumes"
Jan 28 11:48:55 crc kubenswrapper[4804]: I0128 11:48:55.915206 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:09 crc kubenswrapper[4804]: I0128 11:49:09.915364 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:09 crc kubenswrapper[4804]: E0128 11:49:09.916213 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.696875 4804 scope.go:117] "RemoveContainer" containerID="4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.770283 4804 scope.go:117] "RemoveContainer" containerID="942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.787921 4804 scope.go:117] "RemoveContainer" containerID="4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.817253 4804 scope.go:117] "RemoveContainer" containerID="1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.846150 4804 scope.go:117] "RemoveContainer" containerID="00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.873699 4804 scope.go:117] "RemoveContainer" containerID="75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.897158 4804 scope.go:117] "RemoveContainer" containerID="d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44" Jan 28 11:49:21 crc kubenswrapper[4804]: I0128 11:49:21.914343 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:21 crc kubenswrapper[4804]: E0128 11:49:21.914598 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:36 crc kubenswrapper[4804]: I0128 11:49:36.915827 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:36 crc kubenswrapper[4804]: E0128 11:49:36.916724 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:48 crc kubenswrapper[4804]: I0128 11:49:48.915675 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:48 crc kubenswrapper[4804]: E0128 11:49:48.916580 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:59 crc kubenswrapper[4804]: I0128 11:49:59.914851 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:59 crc kubenswrapper[4804]: E0128 11:49:59.915866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:12 crc kubenswrapper[4804]: I0128 11:50:12.915782 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:12 crc kubenswrapper[4804]: E0128 11:50:12.918970 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:21 crc kubenswrapper[4804]: I0128 11:50:21.002608 4804 scope.go:117] "RemoveContainer" containerID="a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268" Jan 28 11:50:21 crc kubenswrapper[4804]: I0128 11:50:21.055962 4804 scope.go:117] "RemoveContainer" containerID="14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685" Jan 28 11:50:23 crc kubenswrapper[4804]: I0128 11:50:23.915207 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:23 crc kubenswrapper[4804]: E0128 11:50:23.915760 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:36 crc kubenswrapper[4804]: I0128 11:50:36.915628 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:36 crc kubenswrapper[4804]: E0128 11:50:36.916406 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:51 crc kubenswrapper[4804]: I0128 11:50:51.915367 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:51 crc kubenswrapper[4804]: E0128 11:50:51.916110 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:04 crc kubenswrapper[4804]: I0128 11:51:04.928513 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:04 crc kubenswrapper[4804]: E0128 11:51:04.931397 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:15 crc kubenswrapper[4804]: I0128 11:51:15.914589 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:15 crc kubenswrapper[4804]: E0128 11:51:15.915080 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.126194 4804 scope.go:117] "RemoveContainer" containerID="2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.159312 4804 scope.go:117] "RemoveContainer" containerID="2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.180055 4804 scope.go:117] "RemoveContainer" containerID="91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.202519 4804 scope.go:117] "RemoveContainer" containerID="67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" Jan 28 11:51:30 crc kubenswrapper[4804]: I0128 11:51:30.915132 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:30 crc kubenswrapper[4804]: E0128 11:51:30.915788 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:44 crc kubenswrapper[4804]: I0128 11:51:44.921507 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:44 crc kubenswrapper[4804]: E0128 11:51:44.922295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:57 crc kubenswrapper[4804]: I0128 11:51:57.915577 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:57 crc kubenswrapper[4804]: E0128 11:51:57.917253 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:10 crc kubenswrapper[4804]: I0128 11:52:10.916038 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:10 crc kubenswrapper[4804]: E0128 11:52:10.917271 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:22 crc kubenswrapper[4804]: I0128 11:52:22.915321 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:22 crc kubenswrapper[4804]: E0128 11:52:22.915989 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:36 crc kubenswrapper[4804]: I0128 11:52:36.915899 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:36 crc kubenswrapper[4804]: E0128 11:52:36.917302 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:50 crc kubenswrapper[4804]: I0128 11:52:50.915370 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:50 crc kubenswrapper[4804]: E0128 11:52:50.916161 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:53:03 crc kubenswrapper[4804]: I0128 11:53:03.915548 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:53:03 crc kubenswrapper[4804]: E0128 11:53:03.916098 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:53:18 crc kubenswrapper[4804]: I0128 11:53:18.915152 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:53:19 crc kubenswrapper[4804]: I0128 11:53:19.697726 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.787740 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788492 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-utilities" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788503 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-utilities" Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788530 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788537 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788549 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-content" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788555 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-content" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788668 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.789599 4804 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.800665 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876337 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876449 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977393 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977605 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977753 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.978540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.978627 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.997472 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.113459 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.540447 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.586524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerStarted","Data":"35a94545608fe9b4b67be83edb17ad0dd73fff1aa646c35e7ed33196ee854ab3"} Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.593869 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" exitCode=0 Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.593997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542"} Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.596453 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.191258 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.193096 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210265 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210743 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210799 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.225013 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312015 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312074 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312156 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312569 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312648 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.336687 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.584633 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.602133 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" exitCode=0 Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.602181 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.131278 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:35 crc kubenswrapper[4804]: W0128 11:55:35.149556 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb808f833_2a0c_4378_96c7_d4b01ce592c1.slice/crio-8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f WatchSource:0}: Error finding container 8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f: Status 404 returned error can't find the container with id 8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.610361 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerStarted","Data":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.612603 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea" exitCode=0 Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.612654 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.612704 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerStarted","Data":"8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.636258 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrld9" podStartSLOduration=3.193877301 podStartE2EDuration="4.63622779s" podCreationTimestamp="2026-01-28 11:55:31 +0000 UTC" firstStartedPulling="2026-01-28 11:55:33.596235202 +0000 UTC m=+2009.391115196" lastFinishedPulling="2026-01-28 11:55:35.038585701 +0000 UTC m=+2010.833465685" observedRunningTime="2026-01-28 11:55:35.629000916 +0000 UTC 
m=+2011.423880900" watchObservedRunningTime="2026-01-28 11:55:35.63622779 +0000 UTC m=+2011.431107774" Jan 28 11:55:37 crc kubenswrapper[4804]: I0128 11:55:37.625480 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3" exitCode=0 Jan 28 11:55:37 crc kubenswrapper[4804]: I0128 11:55:37.625590 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3"} Jan 28 11:55:38 crc kubenswrapper[4804]: I0128 11:55:38.636124 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerStarted","Data":"b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958"} Jan 28 11:55:38 crc kubenswrapper[4804]: I0128 11:55:38.662715 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dzjjh" podStartSLOduration=2.194906669 podStartE2EDuration="4.662690771s" podCreationTimestamp="2026-01-28 11:55:34 +0000 UTC" firstStartedPulling="2026-01-28 11:55:35.613854655 +0000 UTC m=+2011.408734639" lastFinishedPulling="2026-01-28 11:55:38.081638757 +0000 UTC m=+2013.876518741" observedRunningTime="2026-01-28 11:55:38.654082594 +0000 UTC m=+2014.448962588" watchObservedRunningTime="2026-01-28 11:55:38.662690771 +0000 UTC m=+2014.457570775" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.114277 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.114732 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.159135 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.582836 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.582947 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.697488 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:43 crc kubenswrapper[4804]: I0128 11:55:43.976872 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.584751 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 
11:55:44.585051 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.639396 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.688127 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrld9" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" containerID="cri-o://e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" gracePeriod=2 Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.734009 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.188442 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.287827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.288061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.288184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.289866 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities" (OuterVolumeSpecName: "utilities") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.298191 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg" (OuterVolumeSpecName: "kube-api-access-sh7vg") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "kube-api-access-sh7vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.390615 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.390661 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.472729 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.492205 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695550 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" exitCode=0 Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695645 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695837 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"35a94545608fe9b4b67be83edb17ad0dd73fff1aa646c35e7ed33196ee854ab3"} Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695868 4804 scope.go:117] "RemoveContainer" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.722375 4804 scope.go:117] "RemoveContainer" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.731069 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.736375 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.747277 4804 scope.go:117] "RemoveContainer" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770139 4804 scope.go:117] "RemoveContainer" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.770551 4804 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": container with ID starting with e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60 not found: ID does not exist" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770592 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} err="failed to get container status \"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": rpc error: code = NotFound desc = could not find container \"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": container with ID starting with e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60 not found: ID does not exist" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770627 4804 scope.go:117] "RemoveContainer" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.771102 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": container with ID starting with d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7 not found: ID does not exist" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771130 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7"} err="failed to get container status \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": rpc error: code = NotFound desc = could not find container \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": container with ID starting with d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7 not found: ID does not exist" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771148 4804 scope.go:117] "RemoveContainer" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.771421 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": container with ID starting with 09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542 not found: ID does not exist" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771461 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542"} err="failed to get container status \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": rpc error: code = NotFound desc = could not find container \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": container with ID starting with 09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542 not found: ID does not exist" Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.923605 4804 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" path="/var/lib/kubelet/pods/151f894b-da15-43bf-8f8e-44b777c23b68/volumes" Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.977289 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.977514 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dzjjh" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" containerID="cri-o://b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" gracePeriod=2 Jan 28 11:55:47 crc kubenswrapper[4804]: I0128 11:55:47.710269 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" exitCode=0 Jan 28 11:55:47 crc kubenswrapper[4804]: I0128 11:55:47.710308 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958"} Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.808931 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833212 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833263 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.834131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities" (OuterVolumeSpecName: "utilities") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.838420 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6" (OuterVolumeSpecName: "kube-api-access-ntgh6") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "kube-api-access-ntgh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.934179 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.934210 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.725907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f"} Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.725955 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.726237 4804 scope.go:117] "RemoveContainer" containerID="b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.733379 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.743739 4804 scope.go:117] "RemoveContainer" containerID="0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.744426 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.764336 4804 scope.go:117] "RemoveContainer" containerID="cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea" Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.056122 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.061932 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.924276 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" path="/var/lib/kubelet/pods/b808f833-2a0c-4378-96c7-d4b01ce592c1/volumes" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.581374 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582214 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582228 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-utilities" Jan 28 11:56:09 crc 
kubenswrapper[4804]: E0128 11:56:09.582262 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582270 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582279 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582285 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582301 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582307 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582315 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582321 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582329 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582335 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582449 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582461 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.583574 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.597552 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619344 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619444 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720490 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720561 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.721428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.721452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.743402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.901282 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.342699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.857902 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" exitCode=0 Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.858002 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc"} Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.858205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"272face9840fcdab87bd3e3f81b6bc480565580aadc9dd8b088e8d27d255ed68"} Jan 28 11:56:11 crc kubenswrapper[4804]: I0128 11:56:11.867664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.582082 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.582180 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.875855 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" exitCode=0 Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.875996 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} Jan 28 11:56:13 crc kubenswrapper[4804]: I0128 11:56:13.884787 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} Jan 28 11:56:13 crc kubenswrapper[4804]: I0128 11:56:13.905823 4804 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pbqrx" podStartSLOduration=2.485765955 podStartE2EDuration="4.90563282s" podCreationTimestamp="2026-01-28 11:56:09 +0000 UTC" firstStartedPulling="2026-01-28 11:56:10.859686023 +0000 UTC m=+2046.654566007" lastFinishedPulling="2026-01-28 11:56:13.279552888 +0000 UTC m=+2049.074432872" observedRunningTime="2026-01-28 11:56:13.90112031 +0000 UTC m=+2049.696000294" watchObservedRunningTime="2026-01-28 11:56:13.90563282 +0000 UTC m=+2049.700512804" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.902493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.903069 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.940784 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.992327 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:20 crc kubenswrapper[4804]: I0128 11:56:20.172071 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:21 crc kubenswrapper[4804]: I0128 11:56:21.930179 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pbqrx" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" containerID="cri-o://af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" gracePeriod=2 Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.287402 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391456 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391654 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.392512 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities" (OuterVolumeSpecName: "utilities") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.396933 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d" (OuterVolumeSpecName: "kube-api-access-5dz4d") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "kube-api-access-5dz4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.493299 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.493340 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.938973 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" exitCode=0 Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.939079 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.941027 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"272face9840fcdab87bd3e3f81b6bc480565580aadc9dd8b088e8d27d255ed68"} Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.941063 4804 scope.go:117] "RemoveContainer" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.939126 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.957200 4804 scope.go:117] "RemoveContainer" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.973893 4804 scope.go:117] "RemoveContainer" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.994528 4804 scope.go:117] "RemoveContainer" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.994973 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": container with ID starting with af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967 not found: ID does not exist" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995011 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} err="failed to get container status \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": rpc error: code = NotFound desc = could not find container \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": container with ID starting with af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967 not found: ID does not exist" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995035 4804 scope.go:117] "RemoveContainer" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.995322 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": container with ID starting with b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989 not found: ID does not exist" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995346 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} err="failed to get container status \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": rpc error: code = NotFound desc = could not find container \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": container with ID starting with b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989 not found: ID does not exist" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995362 4804 scope.go:117] "RemoveContainer" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.995569 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": container with ID starting with 95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc not found: ID does not exist" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" 
Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995596 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc"} err="failed to get container status \"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": rpc error: code = NotFound desc = could not find container \"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": container with ID starting with 95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc not found: ID does not exist" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.783691 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.812632 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.871770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.879158 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:24 crc kubenswrapper[4804]: I0128 11:56:24.923995 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843a2adb-570f-46ac-8c83-791c0891960b" path="/var/lib/kubelet/pods/843a2adb-570f-46ac-8c83-791c0891960b/volumes" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.581814 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.582632 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.582695 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.584810 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.584938 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" 
containerName="machine-config-daemon" containerID="cri-o://e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" gracePeriod=600 Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.088958 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" exitCode=0 Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089246 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089666 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089686 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:58:42 crc kubenswrapper[4804]: I0128 11:58:42.582694 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:58:42 crc kubenswrapper[4804]: I0128 11:58:42.583282 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:12 crc kubenswrapper[4804]: I0128 11:59:12.581863 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:59:12 crc kubenswrapper[4804]: I0128 11:59:12.582520 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.582210 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583026 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583137 
4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583805 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583861 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" gracePeriod=600 Jan 28 11:59:42 crc kubenswrapper[4804]: E0128 11:59:42.733402 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326319 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" exitCode=0 Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326437 4804 scope.go:117] "RemoveContainer" containerID="e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.327233 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 11:59:43 crc kubenswrapper[4804]: E0128 11:59:43.327672 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:59:57 crc kubenswrapper[4804]: I0128 11:59:57.915201 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 11:59:57 crc kubenswrapper[4804]: E0128 11:59:57.915919 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146305 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"] Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146696 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146721 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146735 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-utilities" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146742 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-utilities" Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146756 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-content" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146764 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-content" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146930 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.147494 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153451 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153601 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"] Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153657 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223264 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223382 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324238 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324332 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.325253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.329967 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.341599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.466120 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.891155 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"] Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.463743 4804 generic.go:334] "Generic (PLEG): container finished" podID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerID="622c408f48add91adb423999741bd11717dcabf800c16d5c9d66d66f2f7c526d" exitCode=0 Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.465044 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerDied","Data":"622c408f48add91adb423999741bd11717dcabf800c16d5c9d66d66f2f7c526d"} Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.465146 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerStarted","Data":"73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66"} Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.751654 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874499 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874599 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.875731 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.882929 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.884496 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn" (OuterVolumeSpecName: "kube-api-access-l45sn") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "kube-api-access-l45sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979750 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979793 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") on node \"crc\" DevicePath \"\"" Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979807 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479129 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerDied","Data":"73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66"} Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479462 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66" Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479192 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.819419 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.842847 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 12:00:04 crc kubenswrapper[4804]: I0128 12:00:04.923760 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" path="/var/lib/kubelet/pods/ae7433f6-40cb-4caf-8356-10bb93645af5/volumes" Jan 28 12:00:11 crc kubenswrapper[4804]: I0128 12:00:11.915671 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:00:11 crc kubenswrapper[4804]: E0128 12:00:11.916453 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:00:21 crc kubenswrapper[4804]: I0128 12:00:21.408425 4804 scope.go:117] "RemoveContainer" containerID="cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895" Jan 28 12:00:25 crc kubenswrapper[4804]: I0128 12:00:25.915426 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:00:25 crc kubenswrapper[4804]: E0128 12:00:25.916400 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:00:39 crc kubenswrapper[4804]: I0128 12:00:39.915189 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:00:39 crc kubenswrapper[4804]: E0128 12:00:39.915903 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:00:52 crc kubenswrapper[4804]: I0128 12:00:52.915058 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:00:52 crc kubenswrapper[4804]: E0128 12:00:52.915760 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:01:06 crc kubenswrapper[4804]: I0128 12:01:06.915677 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:01:06 crc kubenswrapper[4804]: E0128 12:01:06.916510 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:01:20 crc kubenswrapper[4804]: I0128 12:01:20.914937 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:01:20 crc kubenswrapper[4804]: E0128 12:01:20.915661 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:01:32 crc kubenswrapper[4804]: I0128 12:01:32.915580 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:01:32 crc kubenswrapper[4804]: E0128 12:01:32.916550 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:01:45 crc kubenswrapper[4804]: I0128 12:01:45.915601 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:01:45 crc kubenswrapper[4804]: E0128 12:01:45.916340 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:01:58 crc kubenswrapper[4804]: I0128 12:01:58.914761 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:01:58 crc kubenswrapper[4804]: E0128 12:01:58.915250 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:02:13 crc kubenswrapper[4804]: I0128 12:02:13.915504 4804 
scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:02:13 crc kubenswrapper[4804]: E0128 12:02:13.917962 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:02:25 crc kubenswrapper[4804]: I0128 12:02:25.914815 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:02:25 crc kubenswrapper[4804]: E0128 12:02:25.915446 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:02:39 crc kubenswrapper[4804]: I0128 12:02:39.915467 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:02:39 crc kubenswrapper[4804]: E0128 12:02:39.916193 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:02:50 crc kubenswrapper[4804]: I0128 12:02:50.914907 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:02:50 crc kubenswrapper[4804]: E0128 12:02:50.916530 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:03:04 crc kubenswrapper[4804]: I0128 12:03:04.919964 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:03:04 crc kubenswrapper[4804]: E0128 12:03:04.920851 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:03:17 crc kubenswrapper[4804]: I0128 12:03:17.914982 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:03:17 crc kubenswrapper[4804]: E0128 12:03:17.915725 4804 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:03:30 crc kubenswrapper[4804]: I0128 12:03:30.915557 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:03:30 crc kubenswrapper[4804]: E0128 12:03:30.916732 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:03:43 crc kubenswrapper[4804]: I0128 12:03:43.915751 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:03:43 crc kubenswrapper[4804]: E0128 12:03:43.916806 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:03:54 crc kubenswrapper[4804]: I0128 12:03:54.928228 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:03:54 crc kubenswrapper[4804]: E0128 12:03:54.931678 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:04:07 crc kubenswrapper[4804]: I0128 12:04:07.915095 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:04:07 crc kubenswrapper[4804]: E0128 12:04:07.916489 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:04:22 crc kubenswrapper[4804]: I0128 12:04:22.914709 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:04:22 crc kubenswrapper[4804]: E0128 12:04:22.915741 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:04:35 crc kubenswrapper[4804]: I0128 12:04:35.915665 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:04:35 crc kubenswrapper[4804]: E0128 12:04:35.916314 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:04:47 crc kubenswrapper[4804]: I0128 12:04:47.914752 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:04:48 crc kubenswrapper[4804]: I0128 12:04:48.348444 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"} Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.652190 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:28 crc kubenswrapper[4804]: E0128 12:06:28.653431 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.653447 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.653653 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.654843 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.675615 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791017 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791130 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893891 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893924 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.894496 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.894754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.915839 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.976190 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:29 crc kubenswrapper[4804]: I0128 12:06:29.545023 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056527 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b" exitCode=0 Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"} Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056870 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerStarted","Data":"5407d7ab986b7fae2d51fbd84ff48673cbe5fbf1ed74c93e833605b4bda3b44a"} Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.058519 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:06:32 crc kubenswrapper[4804]: I0128 12:06:32.075523 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340" exitCode=0 Jan 28 12:06:32 crc kubenswrapper[4804]: I0128 12:06:32.075607 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"} Jan 28 12:06:33 crc kubenswrapper[4804]: I0128 12:06:33.083577 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerStarted","Data":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"} Jan 28 12:06:33 crc kubenswrapper[4804]: I0128 12:06:33.104708 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wsqw" podStartSLOduration=2.65521039 podStartE2EDuration="5.10468501s" podCreationTimestamp="2026-01-28 12:06:28 +0000 UTC" firstStartedPulling="2026-01-28 12:06:30.058276551 +0000 UTC m=+2665.853156535" lastFinishedPulling="2026-01-28 12:06:32.507751171 +0000 UTC m=+2668.302631155" observedRunningTime="2026-01-28 12:06:33.101554563 +0000 UTC m=+2668.896434547" watchObservedRunningTime="2026-01-28 12:06:33.10468501 +0000 UTC m=+2668.899564994" Jan 28 12:06:38 crc kubenswrapper[4804]: I0128 12:06:38.977089 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:38 crc kubenswrapper[4804]: I0128 12:06:38.977694 4804 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.027415 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.166485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.254258 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.139199 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wsqw" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" containerID="cri-o://b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" gracePeriod=2 Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.532637 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714288 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714346 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714430 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.715276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities" (OuterVolumeSpecName: "utilities") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.723072 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg" (OuterVolumeSpecName: "kube-api-access-5gddg") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "kube-api-access-5gddg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.815500 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") on node \"crc\" DevicePath \"\"" Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.815538 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147205 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" exitCode=0 Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147248 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"} Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"5407d7ab986b7fae2d51fbd84ff48673cbe5fbf1ed74c93e833605b4bda3b44a"} Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147292 4804 scope.go:117] "RemoveContainer" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147409 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.169558 4804 scope.go:117] "RemoveContainer" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.176460 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.190772 4804 scope.go:117] "RemoveContainer" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.210623 4804 scope.go:117] "RemoveContainer" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.211188 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": container with ID starting with b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04 not found: ID does not exist" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211238 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"} err="failed to get container status \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": rpc error: code = NotFound desc = could not find container \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": container with ID starting with b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04 not found: ID does not exist" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211272 4804 scope.go:117] "RemoveContainer" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340" Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.211670 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": container with ID starting with 4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340 not found: ID does not exist" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211704 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"} err="failed to get container status \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": rpc error: code = NotFound desc = could not find container \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": container with ID starting with 4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340 not found: ID does not exist" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211730 4804 scope.go:117] "RemoveContainer" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b" Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.212099 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": container with ID starting with 9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b not found: ID does not exist" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.212131 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"} err="failed to get container status \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": rpc error: code = NotFound desc = could not find container \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": container with ID starting with 9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b not found: ID does not exist" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.220281 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.481973 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.487906 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"] Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.925960 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" path="/var/lib/kubelet/pods/f3b04bfb-68de-453f-b4d7-5000680da6ea/volumes" Jan 28 12:07:12 crc kubenswrapper[4804]: I0128 12:07:12.582027 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:07:12 crc kubenswrapper[4804]: I0128 12:07:12.582637 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:07:42 crc kubenswrapper[4804]: I0128 12:07:42.582757 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:07:42 crc kubenswrapper[4804]: I0128 12:07:42.584412 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.234533 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236230 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-content" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236260 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-content" Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236270 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236277 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236291 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-utilities" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236297 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-utilities" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.239089 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.240387 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.240486 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.321998 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.322405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.322461 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod 
\"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.424324 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.424475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.444655 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.561423 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.081781 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655760 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" exitCode=0 Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655849 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c"} Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655922 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerStarted","Data":"32cc816914e97792913704f608113a3fb2772fa49fbca79479c33e5a8e0aba84"} Jan 28 12:07:54 crc kubenswrapper[4804]: I0128 12:07:54.671431 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" exitCode=0 Jan 28 12:07:54 crc kubenswrapper[4804]: I0128 12:07:54.671503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907"} Jan 28 12:07:55 crc kubenswrapper[4804]: I0128 12:07:55.681562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerStarted","Data":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} Jan 28 12:07:55 crc kubenswrapper[4804]: I0128 12:07:55.702003 4804 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pcp4d" podStartSLOduration=2.213596177 podStartE2EDuration="4.701984334s" podCreationTimestamp="2026-01-28 12:07:51 +0000 UTC" firstStartedPulling="2026-01-28 12:07:52.662544933 +0000 UTC m=+2748.457424917" lastFinishedPulling="2026-01-28 12:07:55.15093309 +0000 UTC m=+2750.945813074" observedRunningTime="2026-01-28 12:07:55.696906227 +0000 UTC m=+2751.491786211" watchObservedRunningTime="2026-01-28 12:07:55.701984334 +0000 UTC m=+2751.496864318" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.562014 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.562647 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.619460 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.751756 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.851249 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:03 crc kubenswrapper[4804]: I0128 12:08:03.728465 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pcp4d" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" containerID="cri-o://83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" gracePeriod=2 Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.118452 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213630 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213737 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.214737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities" (OuterVolumeSpecName: "utilities") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.219158 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs" (OuterVolumeSpecName: "kube-api-access-t64fs") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "kube-api-access-t64fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.269839 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.314828 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.315203 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.315219 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737254 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" exitCode=0 Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737300 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737359 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"32cc816914e97792913704f608113a3fb2772fa49fbca79479c33e5a8e0aba84"} Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737365 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737378 4804 scope.go:117] "RemoveContainer" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.756739 4804 scope.go:117] "RemoveContainer" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.771218 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.776538 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.804072 4804 scope.go:117] "RemoveContainer" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.821743 4804 scope.go:117] "RemoveContainer" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.822466 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": container with ID starting with 83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7 not found: ID does not exist" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.822521 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} err="failed to get container status \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": rpc error: code = NotFound desc = could not find container \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": container with ID starting with 83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7 not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.822555 4804 scope.go:117] "RemoveContainer" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.823016 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": container with ID starting with 0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907 not found: ID does not exist" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823051 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907"} err="failed to get container status \"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": rpc error: code = NotFound desc = could not find container \"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": container with ID starting with 0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907 not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823073 4804 scope.go:117] "RemoveContainer" 
containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.823372 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": container with ID starting with a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c not found: ID does not exist" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823416 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c"} err="failed to get container status \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": rpc error: code = NotFound desc = could not find container \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": container with ID starting with a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.925256 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182a1540-9bf9-4275-bed6-695b4543de27" path="/var/lib/kubelet/pods/182a1540-9bf9-4275-bed6-695b4543de27/volumes" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.582692 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583248 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583296 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583932 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583989 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e" gracePeriod=600 Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784856 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e" exitCode=0 Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784916 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"} Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784997 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:08:13 crc kubenswrapper[4804]: I0128 12:08:13.793078 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"} Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.737223 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738080 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-content" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738091 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-content" Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738109 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-utilities" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738115 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-utilities" Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738125 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738131 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738260 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.739278 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.754275 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.842834 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.843210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.843255 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.942679 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944578 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944679 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.945068 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.946729 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.946939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.957911 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.970014 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046500 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.059221 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149831 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149864 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.150448 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.150497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.173722 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.257344 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.505079 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.518524 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:24 crc kubenswrapper[4804]: W0128 12:09:24.530966 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86faab76_d908_4b49_85bf_e5209af19052.slice/crio-b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc WatchSource:0}: Error finding container b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc: Status 404 returned error can't find the container with id b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259061 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" exitCode=0 Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerStarted","Data":"b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.264995 4804 generic.go:334] "Generic (PLEG): container finished" podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" exitCode=0 Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.265057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.265090 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"ae90a840031eaca3f34697b26474729a2d6ca70016fb2e230f6362627e5d39d2"} Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.274057 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" exitCode=0 Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.274169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b"} Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.281570 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" 
event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.293695 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerStarted","Data":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.295954 4804 generic.go:334] "Generic (PLEG): container finished" podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" exitCode=0 Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.295992 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.327806 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5fsc" podStartSLOduration=2.780763035 podStartE2EDuration="4.327765558s" podCreationTimestamp="2026-01-28 12:09:23 +0000 UTC" firstStartedPulling="2026-01-28 12:09:25.265596863 +0000 UTC m=+2841.060476847" lastFinishedPulling="2026-01-28 12:09:26.812599386 +0000 UTC m=+2842.607479370" observedRunningTime="2026-01-28 12:09:27.316796678 +0000 UTC m=+2843.111676662" watchObservedRunningTime="2026-01-28 12:09:27.327765558 +0000 UTC m=+2843.122645572" Jan 28 12:09:28 crc kubenswrapper[4804]: I0128 12:09:28.307372 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} Jan 28 12:09:28 crc kubenswrapper[4804]: I0128 12:09:28.328971 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7nv2" podStartSLOduration=2.908322672 podStartE2EDuration="5.328937577s" podCreationTimestamp="2026-01-28 12:09:23 +0000 UTC" firstStartedPulling="2026-01-28 12:09:25.267830762 +0000 UTC m=+2841.062710746" lastFinishedPulling="2026-01-28 12:09:27.688445627 +0000 UTC m=+2843.483325651" observedRunningTime="2026-01-28 12:09:28.327048708 +0000 UTC m=+2844.121928712" watchObservedRunningTime="2026-01-28 12:09:28.328937577 +0000 UTC m=+2844.123817571" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.059778 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.060664 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.118461 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.258112 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.258554 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.299131 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.386847 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.399654 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.926656 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:35 crc kubenswrapper[4804]: I0128 12:09:35.125221 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.358763 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5fsc" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server" containerID="cri-o://8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" gracePeriod=2 Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.358871 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7nv2" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server" containerID="cri-o://b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" gracePeriod=2 Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.885734 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.892995 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035545 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035612 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035631 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035801 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.036486 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities" (OuterVolumeSpecName: "utilities") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.036536 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities" (OuterVolumeSpecName: "utilities") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.037614 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.037712 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.041028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh" (OuterVolumeSpecName: "kube-api-access-kdkzh") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "kube-api-access-kdkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.041119 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr" (OuterVolumeSpecName: "kube-api-access-748rr") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "kube-api-access-748rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.139464 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.139503 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367534 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" exitCode=0 Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367572 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367617 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367649 4804 scope.go:117] "RemoveContainer" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.369365 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372039 4804 generic.go:334] "Generic (PLEG): container finished" podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" exitCode=0 Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372162 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372344 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"ae90a840031eaca3f34697b26474729a2d6ca70016fb2e230f6362627e5d39d2"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372213 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.385815 4804 scope.go:117] "RemoveContainer" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.401011 4804 scope.go:117] "RemoveContainer" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.414797 4804 scope.go:117] "RemoveContainer" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415123 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": container with ID starting with 8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1 not found: ID does not exist" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415164 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} err="failed to get container status \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": rpc error: code = NotFound desc = could not find container \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": container with ID starting with 8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415190 4804 scope.go:117] "RemoveContainer" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415565 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": container with ID starting with de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b not found: ID does not exist" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415596 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b"} err="failed to get container status \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": rpc error: code = NotFound desc = could not find container \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": container with ID starting with de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415611 4804 scope.go:117] "RemoveContainer" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415911 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": container with ID starting with cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462 not found: ID does not exist" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415938 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462"} err="failed to get container status \"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": rpc error: code = NotFound desc = could not find container \"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": container with ID starting with cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415953 4804 scope.go:117] "RemoveContainer" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.428178 4804 scope.go:117] "RemoveContainer" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.442470 4804 scope.go:117] "RemoveContainer" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456032 4804 scope.go:117] "RemoveContainer" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.456401 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": container with ID starting with b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506 not found: ID does not exist" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456433 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} err="failed to get container status \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": rpc error: code = NotFound desc = could not find container \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": container with ID starting with b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456458 4804 
scope.go:117] "RemoveContainer" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.456821 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": container with ID starting with c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de not found: ID does not exist" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456839 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} err="failed to get container status \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": rpc error: code = NotFound desc = could not find container \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": container with ID starting with c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456854 4804 scope.go:117] "RemoveContainer" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.457231 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": container with ID starting with e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372 not found: ID does not exist" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.457254 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372"} err="failed to get container status \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": rpc error: code = NotFound desc = could not find container \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": container with ID starting with e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.944802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.949674 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.007626 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.014722 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.495145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.560004 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.610139 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.616567 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.942217 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e47861-4801-4654-b1df-0638f0e86369" path="/var/lib/kubelet/pods/38e47861-4801-4654-b1df-0638f0e86369/volumes" Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.942940 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86faab76-d908-4b49-85bf-e5209af19052" path="/var/lib/kubelet/pods/86faab76-d908-4b49-85bf-e5209af19052/volumes" Jan 28 12:10:12 crc kubenswrapper[4804]: I0128 12:10:12.581676 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:10:12 crc kubenswrapper[4804]: I0128 12:10:12.582223 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:10:42 crc kubenswrapper[4804]: I0128 12:10:42.582091 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:10:42 crc kubenswrapper[4804]: I0128 12:10:42.582703 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.583085 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.583643 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584113 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584824 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584877 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" gracePeriod=600 Jan 28 12:11:12 crc kubenswrapper[4804]: E0128 12:11:12.716909 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069234 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" exitCode=0 Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069296 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"} Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069330 4804 scope.go:117] "RemoveContainer" containerID="566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e" Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069706 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:11:13 crc kubenswrapper[4804]: E0128 12:11:13.069937 4804 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:11:27 crc kubenswrapper[4804]: I0128 12:11:27.914969 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:11:27 crc kubenswrapper[4804]: E0128 12:11:27.915821 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:11:42 crc kubenswrapper[4804]: I0128 12:11:42.914839 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:11:42 crc kubenswrapper[4804]: E0128 12:11:42.916536 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:11:54 crc kubenswrapper[4804]: I0128 12:11:54.918399 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:11:54 crc kubenswrapper[4804]: E0128 12:11:54.920710 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:12:08 crc kubenswrapper[4804]: I0128 12:12:08.914924 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:12:08 crc kubenswrapper[4804]: E0128 12:12:08.915702 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:12:19 crc kubenswrapper[4804]: I0128 12:12:19.915141 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:12:19 crc kubenswrapper[4804]: E0128 12:12:19.915949 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:12:32 crc kubenswrapper[4804]: I0128 12:12:32.915712 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:12:32 crc kubenswrapper[4804]: E0128 12:12:32.916920 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:12:47 crc kubenswrapper[4804]: I0128 12:12:47.914817 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:12:47 crc kubenswrapper[4804]: E0128 12:12:47.915540 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:12:58 crc kubenswrapper[4804]: I0128 12:12:58.914845 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:12:58 crc kubenswrapper[4804]: E0128 12:12:58.915738 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:13:12 crc kubenswrapper[4804]: I0128 12:13:12.915158 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:13:12 crc kubenswrapper[4804]: E0128 12:13:12.916327 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:13:26 crc kubenswrapper[4804]: I0128 12:13:26.915324 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:13:26 crc kubenswrapper[4804]: E0128 12:13:26.916634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:13:38 crc kubenswrapper[4804]: I0128 12:13:38.915825 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:13:38 crc kubenswrapper[4804]: E0128 12:13:38.916749 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:13:49 crc kubenswrapper[4804]: I0128 12:13:49.914727 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:13:49 crc kubenswrapper[4804]: E0128 12:13:49.915516 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:14:00 crc kubenswrapper[4804]: I0128 12:14:00.914843 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:14:00 crc kubenswrapper[4804]: E0128 12:14:00.916305 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:14:12 crc kubenswrapper[4804]: I0128 12:14:12.915724 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:14:12 crc kubenswrapper[4804]: E0128 12:14:12.916508 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:14:26 crc kubenswrapper[4804]: I0128 12:14:26.915068 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:14:26 crc kubenswrapper[4804]: E0128 12:14:26.915689 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:14:37 crc kubenswrapper[4804]: I0128 12:14:37.914949 4804 scope.go:117] "RemoveContainer" 
containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:14:37 crc kubenswrapper[4804]: E0128 12:14:37.915670 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:14:52 crc kubenswrapper[4804]: I0128 12:14:52.915724 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:14:52 crc kubenswrapper[4804]: E0128 12:14:52.916300 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.143087 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"] Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146012 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-utilities" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146034 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-utilities" Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-content" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146061 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-content" Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146072 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-utilities" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146080 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-utilities" Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146091 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146113 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146120 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146134 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-content" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146144 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-content" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146305 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146331 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.149027 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.150806 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"] Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.150993 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.151077 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341461 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341792 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341813 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.442672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.442972 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: 
\"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.443091 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.443997 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.449765 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.459447 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.472129 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.864306 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"] Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778334 4804 generic.go:334] "Generic (PLEG): container finished" podID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerID="c55fea870228d7c60c4dcade51769d821b6a45662de1315a0385d6343440a705" exitCode=0 Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778385 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerDied","Data":"c55fea870228d7c60c4dcade51769d821b6a45662de1315a0385d6343440a705"} Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778415 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerStarted","Data":"1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353"} Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.034486 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084230 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084270 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.085543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.089647 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.089678 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv" (OuterVolumeSpecName: "kube-api-access-kj9fv") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "kube-api-access-kj9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185812 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185846 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") on node \"crc\" DevicePath \"\"" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185855 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.797692 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.797765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerDied","Data":"1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353"} Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.798321 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353" Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.099282 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"] Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.104074 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"] Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.926074 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" path="/var/lib/kubelet/pods/83929dab-2f27-41a0-aaea-ec500ff4b6e7/volumes" Jan 28 12:15:05 crc kubenswrapper[4804]: I0128 12:15:05.914919 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:15:05 crc kubenswrapper[4804]: E0128 12:15:05.915159 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:15:18 crc kubenswrapper[4804]: I0128 12:15:18.914919 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:15:18 crc kubenswrapper[4804]: E0128 12:15:18.916031 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:15:21 crc kubenswrapper[4804]: I0128 12:15:21.685566 4804 scope.go:117] "RemoveContainer" containerID="647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551" Jan 28 12:15:31 crc kubenswrapper[4804]: I0128 12:15:31.915785 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:15:31 crc kubenswrapper[4804]: E0128 12:15:31.916567 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:15:42 crc kubenswrapper[4804]: I0128 
12:15:42.916083 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:15:42 crc kubenswrapper[4804]: E0128 12:15:42.916988 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:15:53 crc kubenswrapper[4804]: I0128 12:15:53.915339 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:15:53 crc kubenswrapper[4804]: E0128 12:15:53.916111 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:16:08 crc kubenswrapper[4804]: I0128 12:16:08.918313 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:16:08 crc kubenswrapper[4804]: E0128 12:16:08.921382 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:16:21 crc kubenswrapper[4804]: I0128 12:16:21.915253 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:16:22 crc kubenswrapper[4804]: I0128 12:16:22.178431 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"} Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.959826 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:17:47 crc kubenswrapper[4804]: E0128 12:17:47.960855 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles" Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.960913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles" Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.961166 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles" Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.963910 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.968850 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102224 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203562 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203624 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203705 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.204202 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.204232 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.221630 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.282245 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.805043 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815368 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00" exitCode=0 Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815430 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00"} Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815917 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d"} Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.818738 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:17:50 crc kubenswrapper[4804]: I0128 12:17:50.824842 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834541 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d" exitCode=0 Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834618 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834752 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.856695 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n96p5" podStartSLOduration=3.410841289 podStartE2EDuration="4.856677599s" podCreationTimestamp="2026-01-28 12:17:47 +0000 UTC" firstStartedPulling="2026-01-28 12:17:49.818306245 +0000 UTC m=+3345.613186229" lastFinishedPulling="2026-01-28 12:17:51.264142555 +0000 UTC m=+3347.059022539" observedRunningTime="2026-01-28 12:17:51.855284976 +0000 UTC m=+3347.650164980" watchObservedRunningTime="2026-01-28 
12:17:51.856677599 +0000 UTC m=+3347.651557593" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.283255 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.283815 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.340117 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.932199 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.981119 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:00 crc kubenswrapper[4804]: I0128 12:18:00.897463 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n96p5" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" containerID="cri-o://d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409" gracePeriod=2 Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908362 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409" exitCode=0 Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908513 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409"} Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d"} Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908745 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d" Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.920073 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025131 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025207 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025254 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.026270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities" (OuterVolumeSpecName: "utilities") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.036679 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td" (OuterVolumeSpecName: "kube-api-access-f96td") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "kube-api-access-f96td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.077808 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126845 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126884 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126935 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.915474 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.952072 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.958595 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:04 crc kubenswrapper[4804]: I0128 12:18:04.931778 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" path="/var/lib/kubelet/pods/227aeb2b-9a72-4194-8989-a1f38ed1c1fc/volumes" Jan 28 12:18:42 crc kubenswrapper[4804]: I0128 12:18:42.582630 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:18:42 crc kubenswrapper[4804]: I0128 12:18:42.583345 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:19:12 crc kubenswrapper[4804]: I0128 12:19:12.582506 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:19:12 crc kubenswrapper[4804]: I0128 12:19:12.583169 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402198 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402727 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-content" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402738 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-content" Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402767 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-utilities" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402774 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-utilities" Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402783 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402790 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc 
kubenswrapper[4804]: I0128 12:19:13.402952 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.403955 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.431584 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594504 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594932 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594978 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696554 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696685 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.697130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.697195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4thk\" (UniqueName: 
\"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.727550 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.739583 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.070751 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424508 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca" exitCode=0 Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424555 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"} Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424797 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"456da1abdd937ecd19faef71326544de6e697da231fd39d63420c50ca22d3910"} Jan 28 12:19:15 crc kubenswrapper[4804]: I0128 12:19:15.436050 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} Jan 28 12:19:16 crc kubenswrapper[4804]: I0128 12:19:16.445211 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36" exitCode=0 Jan 28 12:19:16 crc kubenswrapper[4804]: I0128 12:19:16.445289 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} Jan 28 12:19:17 crc kubenswrapper[4804]: I0128 12:19:17.454486 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"} Jan 28 12:19:18 crc kubenswrapper[4804]: I0128 12:19:18.483053 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r22km" podStartSLOduration=2.668039816 podStartE2EDuration="5.483035136s" podCreationTimestamp="2026-01-28 12:19:13 +0000 UTC" firstStartedPulling="2026-01-28 12:19:14.42605343 +0000 UTC m=+3430.220933414" lastFinishedPulling="2026-01-28 
12:19:17.24104875 +0000 UTC m=+3433.035928734" observedRunningTime="2026-01-28 12:19:18.478008039 +0000 UTC m=+3434.272888023" watchObservedRunningTime="2026-01-28 12:19:18.483035136 +0000 UTC m=+3434.277915120" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.740242 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.740652 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.781215 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:24 crc kubenswrapper[4804]: I0128 12:19:24.549349 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:24 crc kubenswrapper[4804]: I0128 12:19:24.594366 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:26 crc kubenswrapper[4804]: I0128 12:19:26.521613 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r22km" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server" containerID="cri-o://84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" gracePeriod=2 Jan 28 12:19:26 crc kubenswrapper[4804]: I0128 12:19:26.924734 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.082945 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083585 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083899 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities" (OuterVolumeSpecName: "utilities") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: "5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.084173 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.088631 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk" (OuterVolumeSpecName: "kube-api-access-b4thk") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: "5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "kube-api-access-b4thk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.137246 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: "5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.185788 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.185826 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530102 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" exitCode=0 Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530219 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r22km"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530211 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"}
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530814 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"456da1abdd937ecd19faef71326544de6e697da231fd39d63420c50ca22d3910"}
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530839 4804 scope.go:117] "RemoveContainer" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.554105 4804 scope.go:117] "RemoveContainer" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.562483 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r22km"]
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.567960 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r22km"]
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.591497 4804 scope.go:117] "RemoveContainer" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608404 4804 scope.go:117] "RemoveContainer" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"
Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.608860 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": container with ID starting with 84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e not found: ID does not exist" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608905 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"} err="failed to get container status \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": rpc error: code = NotFound desc = could not find container \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": container with ID starting with 84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e not found: ID does not exist"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608926 4804 scope.go:117] "RemoveContainer" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"
Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.609222 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": container with ID starting with c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36 not found: ID does not exist" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609240 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} err="failed to get container status \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": rpc error: code = NotFound desc = could not find container \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": container with ID starting with c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36 not found: ID does not exist"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609251 4804 scope.go:117] "RemoveContainer" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"
Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.609543 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": container with ID starting with 0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca not found: ID does not exist" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"
Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609569 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"} err="failed to get container status \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": rpc error: code = NotFound desc = could not find container \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": container with ID starting with 0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca not found: ID does not exist"
Jan 28 12:19:28 crc kubenswrapper[4804]: I0128 12:19:28.924561 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" path="/var/lib/kubelet/pods/5d905c97-1bab-4517-885a-c30ce8c59b3c/volumes"
Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.582474 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.583273 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.583340 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.584226 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.584346 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e" gracePeriod=600
Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.643675 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e" exitCode=0
Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.643717 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"}
Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.644209 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"}
Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.644252 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.577572 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578442 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578456 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server"
Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578473 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-content"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578481 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-content"
Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578494 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-utilities"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578501 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-utilities"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578696 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.579869 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.599990 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740637 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740713 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841664 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841717 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.842230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.842664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.866285 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.899636 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.351802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:19:52 crc kubenswrapper[4804]: W0128 12:19:52.357522 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1e4fe89_15ca_4c38_b6e0_3ebbdc7ce0fa.slice/crio-11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06 WatchSource:0}: Error finding container 11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06: Status 404 returned error can't find the container with id 11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06
Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711230 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297" exitCode=0
Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"}
Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711303 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerStarted","Data":"11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06"}
Jan 28 12:19:53 crc kubenswrapper[4804]: I0128 12:19:53.719989 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d" exitCode=0
Jan 28 12:19:53 crc kubenswrapper[4804]: I0128 12:19:53.720111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"}
Jan 28 12:19:54 crc kubenswrapper[4804]: I0128 12:19:54.729727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerStarted","Data":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"}
Jan 28 12:19:54 crc kubenswrapper[4804]: I0128 12:19:54.752946 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9s5x" podStartSLOduration=2.314967958 podStartE2EDuration="3.752925914s" podCreationTimestamp="2026-01-28 12:19:51 +0000 UTC" firstStartedPulling="2026-01-28 12:19:52.712851792 +0000 UTC m=+3468.507731776" lastFinishedPulling="2026-01-28 12:19:54.150809718 +0000 UTC m=+3469.945689732" observedRunningTime="2026-01-28 12:19:54.75154691 +0000 UTC m=+3470.546426904" watchObservedRunningTime="2026-01-28 12:19:54.752925914 +0000 UTC m=+3470.547805898"
Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.900274 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.901482 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.945912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:02 crc kubenswrapper[4804]: I0128 12:20:02.821685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:02 crc kubenswrapper[4804]: I0128 12:20:02.870662 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:20:04 crc kubenswrapper[4804]: I0128 12:20:04.797184 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9s5x" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server" containerID="cri-o://b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" gracePeriod=2
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.261243 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") "
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332865 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") "
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332910 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") "
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.333774 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities" (OuterVolumeSpecName: "utilities") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.338378 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l" (OuterVolumeSpecName: "kube-api-access-2bt4l") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "kube-api-access-2bt4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.359200 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.434989 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.435026 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") on node \"crc\" DevicePath \"\""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.435039 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804278 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" exitCode=0
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804319 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"}
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804344 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06"}
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804362 4804 scope.go:117] "RemoveContainer" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804459 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.822300 4804 scope.go:117] "RemoveContainer" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.836795 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.843188 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"]
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.853814 4804 scope.go:117] "RemoveContainer" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.879982 4804 scope.go:117] "RemoveContainer" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"
Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.882725 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": container with ID starting with b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb not found: ID does not exist" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.882778 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"} err="failed to get container status \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": rpc error: code = NotFound desc = could not find container \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": container with ID starting with b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb not found: ID does not exist"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.882806 4804 scope.go:117] "RemoveContainer" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"
Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.884139 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": container with ID starting with e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d not found: ID does not exist" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.884179 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"} err="failed to get container status \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": rpc error: code = NotFound desc = could not find container \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": container with ID starting with e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d not found: ID does not exist"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.884205 4804 scope.go:117] "RemoveContainer" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"
Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.884651 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": container with ID starting with e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297 not found: ID does not exist" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"
Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.884691 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"} err="failed to get container status \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": rpc error: code = NotFound desc = could not find container \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": container with ID starting with e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297 not found: ID does not exist"
Jan 28 12:20:06 crc kubenswrapper[4804]: I0128 12:20:06.931712 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" path="/var/lib/kubelet/pods/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa/volumes"
Jan 28 12:21:42 crc kubenswrapper[4804]: I0128 12:21:42.581843 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:21:42 crc kubenswrapper[4804]: I0128 12:21:42.582394 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:22:12 crc kubenswrapper[4804]: I0128 12:22:12.582347 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:22:12 crc kubenswrapper[4804]: I0128 12:22:12.582809 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.582310 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.582991 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583676 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583736 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" gracePeriod=600
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897262 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" exitCode=0
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897321 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"}
Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897668 4804 scope.go:117] "RemoveContainer" containerID="bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"
Jan 28 12:22:43 crc kubenswrapper[4804]: E0128 12:22:43.300334 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:22:43 crc kubenswrapper[4804]: I0128 12:22:43.904815 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:22:43 crc kubenswrapper[4804]: E0128 12:22:43.905630 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:22:57 crc kubenswrapper[4804]: I0128 12:22:57.915618 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:22:57 crc kubenswrapper[4804]: E0128 12:22:57.916629 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:23:11 crc kubenswrapper[4804]: I0128 12:23:11.914950 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:23:11 crc kubenswrapper[4804]: E0128 12:23:11.915925 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.502658 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503325 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-content"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503337 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-content"
Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503353 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503359 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server"
Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503380 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-utilities"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503386 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-utilities"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503546 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.504424 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.518421 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595271 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595366 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696440 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696544 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696596 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.697303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.697382 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.717725 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.826152 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:17 crc kubenswrapper[4804]: I0128 12:23:17.293472 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:17 crc kubenswrapper[4804]: I0128 12:23:17.356062 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"4b15619cc1b1a276b7a17289d167063b4c95da5e04237997b496ec98be8c4e08"}
Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.362816 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955" exitCode=0
Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.362919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955"}
Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.364483 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 12:23:20 crc kubenswrapper[4804]: I0128 12:23:20.381543 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc"}
Jan 28 12:23:21 crc kubenswrapper[4804]: I0128 12:23:21.391811 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc" exitCode=0
Jan 28 12:23:21 crc kubenswrapper[4804]: I0128 12:23:21.391860 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc"}
Jan 28 12:23:22 crc kubenswrapper[4804]: I0128 12:23:22.400768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7"}
Jan 28 12:23:22 crc kubenswrapper[4804]: I0128 12:23:22.427272 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4676" podStartSLOduration=3.023284493 podStartE2EDuration="6.427256061s" podCreationTimestamp="2026-01-28 12:23:16 +0000 UTC" firstStartedPulling="2026-01-28 12:23:18.364186849 +0000 UTC m=+3674.159066833" lastFinishedPulling="2026-01-28 12:23:21.768158417 +0000 UTC m=+3677.563038401" observedRunningTime="2026-01-28 12:23:22.42181246 +0000 UTC m=+3678.216692444" watchObservedRunningTime="2026-01-28 12:23:22.427256061 +0000 UTC m=+3678.222136045"
Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.826667 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.827304 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.915295 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:23:26 crc kubenswrapper[4804]: E0128 12:23:26.915621 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:23:27 crc kubenswrapper[4804]: I0128 12:23:27.887519 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4676" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" probeResult="failure" output=<
Jan 28 12:23:27 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 12:23:27 crc kubenswrapper[4804]: >
Jan 28 12:23:36 crc kubenswrapper[4804]: I0128 12:23:36.869110 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:36 crc kubenswrapper[4804]: I0128 12:23:36.922339 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.228012 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.230015 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4676" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" containerID="cri-o://0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7" gracePeriod=2
Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.541104 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7" exitCode=0
Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.541178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7"}
Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.915026 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:23:41 crc kubenswrapper[4804]: E0128 12:23:41.915273 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.202897 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271645 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") "
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") "
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271781 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") "
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.272795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities" (OuterVolumeSpecName: "utilities") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.277343 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg" (OuterVolumeSpecName: "kube-api-access-dggqg") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "kube-api-access-dggqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.374714 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.374760 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") on node \"crc\" DevicePath \"\""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.398115 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.476806 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"4b15619cc1b1a276b7a17289d167063b4c95da5e04237997b496ec98be8c4e08"}
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552611 4804 scope.go:117] "RemoveContainer" containerID="0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552657 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.569753 4804 scope.go:117] "RemoveContainer" containerID="02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.585835 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.593401 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"]
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.618308 4804 scope.go:117] "RemoveContainer" containerID="00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955"
Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.925498 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" path="/var/lib/kubelet/pods/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e/volumes"
Jan 28 12:23:56 crc kubenswrapper[4804]: I0128 12:23:56.915122 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:23:56 crc kubenswrapper[4804]: E0128 12:23:56.916601 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:24:07 crc kubenswrapper[4804]: I0128 12:24:07.915044 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:24:07 crc kubenswrapper[4804]: E0128 12:24:07.915780 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.909866 4804 scope.go:117] "RemoveContainer" containerID="2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00"
Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.915228 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:24:21 crc kubenswrapper[4804]: E0128 12:24:21.915455 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.935462 4804 scope.go:117] "RemoveContainer" containerID="d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d"
Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.962442 4804 scope.go:117] "RemoveContainer" containerID="d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409"
Jan 28 12:24:35 crc kubenswrapper[4804]: I0128 12:24:35.915097 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:24:35 crc kubenswrapper[4804]: E0128 12:24:35.917139 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:24:47 crc kubenswrapper[4804]: I0128 12:24:47.915412 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:24:47 crc kubenswrapper[4804]: E0128 12:24:47.916183 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:24:58 crc kubenswrapper[4804]: I0128 12:24:58.915447 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:24:58 crc kubenswrapper[4804]: E0128 12:24:58.916082 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:25:13 crc kubenswrapper[4804]: I0128 12:25:13.915427 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:25:13 crc kubenswrapper[4804]: E0128 12:25:13.916540 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:25:25 crc kubenswrapper[4804]: I0128 12:25:25.915339 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:25:25 crc kubenswrapper[4804]: E0128 12:25:25.916295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:25:36 crc kubenswrapper[4804]: I0128 12:25:36.915378 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:25:36 crc kubenswrapper[4804]: E0128 12:25:36.918391 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:25:49 crc kubenswrapper[4804]: I0128 12:25:49.916162 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:25:49 crc kubenswrapper[4804]: E0128 12:25:49.917448 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:26:01 crc kubenswrapper[4804]: I0128 12:26:01.915248 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:26:01 crc kubenswrapper[4804]: E0128 12:26:01.916030 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:26:14 crc kubenswrapper[4804]: I0128 12:26:14.919016 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:26:14 crc kubenswrapper[4804]: E0128 12:26:14.919739 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:26:25 crc kubenswrapper[4804]: I0128 12:26:25.915467 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:26:25 crc kubenswrapper[4804]: E0128 12:26:25.916484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:26:39 crc kubenswrapper[4804]: I0128 12:26:39.915735 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:26:39 crc kubenswrapper[4804]: E0128 12:26:39.919435 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:26:51 crc kubenswrapper[4804]: I0128 12:26:51.915212 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:26:51 crc kubenswrapper[4804]: E0128 12:26:51.916272 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:27:05 crc kubenswrapper[4804]: I0128 12:27:05.915640 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:27:05 crc kubenswrapper[4804]: E0128 12:27:05.916359 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:27:19 crc kubenswrapper[4804]: I0128 12:27:19.915082 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:27:19 crc kubenswrapper[4804]: E0128 12:27:19.915797 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:27:32 crc kubenswrapper[4804]: I0128 12:27:32.914990 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:27:32 crc kubenswrapper[4804]: E0128 12:27:32.916964 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:27:47 crc kubenswrapper[4804]: I0128 12:27:47.915512 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"
Jan 28 12:27:48 crc kubenswrapper[4804]: I0128 12:27:48.386139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"}
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.884803 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2mnv"]
Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.886843 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893149 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server"
Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.893330 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-utilities"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893410 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-utilities"
Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.893493 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-content"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893566 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-content"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893946 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.895204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.895438 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"]
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.086818 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.086919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.087123 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.187929 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.187972 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188021 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188504 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv"
Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.209387 4804 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.223930 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.712285 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.777809 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerStarted","Data":"dd209b0c3b982a568dd0db1a3f08e9013293b854458609a8faa07aeb543cea0a"} Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.784447 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" exitCode=0 Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.784647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4"} Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.786511 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:28:48 crc kubenswrapper[4804]: I0128 12:28:48.799173 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" exitCode=0 Jan 28 12:28:48 crc kubenswrapper[4804]: I0128 12:28:48.799213 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5"} Jan 28 12:28:51 crc kubenswrapper[4804]: I0128 12:28:51.820731 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerStarted","Data":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} Jan 28 12:28:51 crc kubenswrapper[4804]: I0128 12:28:51.847962 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2mnv" podStartSLOduration=3.961645325 podStartE2EDuration="7.847928404s" podCreationTimestamp="2026-01-28 12:28:44 +0000 UTC" firstStartedPulling="2026-01-28 12:28:46.786310285 +0000 UTC m=+4002.581190269" lastFinishedPulling="2026-01-28 12:28:50.672593324 +0000 UTC m=+4006.467473348" observedRunningTime="2026-01-28 12:28:51.839075767 +0000 UTC m=+4007.633955751" watchObservedRunningTime="2026-01-28 12:28:51.847928404 +0000 UTC m=+4007.642808388" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.225108 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.225455 4804 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.275068 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:56 crc kubenswrapper[4804]: I0128 12:28:56.305415 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:56 crc kubenswrapper[4804]: I0128 12:28:56.358677 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:57 crc kubenswrapper[4804]: I0128 12:28:57.856813 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2mnv" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" containerID="cri-o://97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" gracePeriod=2 Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.244129 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357130 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357196 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357231 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.358592 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities" (OuterVolumeSpecName: "utilities") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.369989 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6" (OuterVolumeSpecName: "kube-api-access-tzjb6") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). InnerVolumeSpecName "kube-api-access-tzjb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.411937 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458685 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458723 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458735 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.863960 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" exitCode=0 Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864014 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864045 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"dd209b0c3b982a568dd0db1a3f08e9013293b854458609a8faa07aeb543cea0a"} Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864097 4804 scope.go:117] "RemoveContainer" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.888599 4804 scope.go:117] "RemoveContainer" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.900782 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.907830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.917489 4804 scope.go:117] "RemoveContainer" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.926567 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" path="/var/lib/kubelet/pods/98806e2d-b65e-409f-b942-e8c1d833c27b/volumes" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.936343 4804 scope.go:117] "RemoveContainer" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.936924 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": container with ID starting with 97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b not found: ID does not exist" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.936975 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} err="failed to get container status \"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": rpc error: code = NotFound desc = could not find container \"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": container with ID starting with 97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b not found: ID does not exist" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937029 4804 scope.go:117] "RemoveContainer" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.937645 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": container with ID starting with 1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5 not found: ID does not exist" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937697 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5"} err="failed to get container status \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": rpc error: code = NotFound desc = could not find container \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": container with ID starting with 1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5 not found: ID does not exist" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937732 4804 scope.go:117] "RemoveContainer" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.938214 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": container with ID starting with c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4 not found: ID does not exist" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.938265 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4"} err="failed to get container status \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": rpc error: code = NotFound desc = could not find container \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": container with ID starting with c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4 not found: ID does not exist" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.174733 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:00 crc 
kubenswrapper[4804]: E0128 12:30:00.175714 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-utilities" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175731 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-utilities" Jan 28 12:30:00 crc kubenswrapper[4804]: E0128 12:30:00.175746 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-content" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175754 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-content" Jan 28 12:30:00 crc kubenswrapper[4804]: E0128 12:30:00.175771 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175778 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175988 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.176722 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.179076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.179314 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.185451 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.313867 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.314248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.314329 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc 
kubenswrapper[4804]: I0128 12:30:00.415327 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.415408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.415434 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.416848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.421074 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.433378 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.518028 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.928962 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284476 4804 generic.go:334] "Generic (PLEG): container finished" podID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerID="e8ea830423bf5163ac427bed1acd7b672005786797bb0ad27e9106f80ca5a96e" exitCode=0 Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284537 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerDied","Data":"e8ea830423bf5163ac427bed1acd7b672005786797bb0ad27e9106f80ca5a96e"} Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerStarted","Data":"9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5"} Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.584768 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645638 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645742 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645769 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.646416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.652789 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.653780 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk" (OuterVolumeSpecName: "kube-api-access-cxtzk") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "kube-api-access-cxtzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.747628 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.748077 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.748094 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerDied","Data":"9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5"} Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300329 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5" Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300406 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"
Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.675362 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"]
Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.680100 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"]
Jan 28 12:30:04 crc kubenswrapper[4804]: I0128 12:30:04.922398 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" path="/var/lib/kubelet/pods/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc/volumes"
Jan 28 12:30:12 crc kubenswrapper[4804]: I0128 12:30:12.582190 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:30:12 crc kubenswrapper[4804]: I0128 12:30:12.583402 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:30:22 crc kubenswrapper[4804]: I0128 12:30:22.100713 4804 scope.go:117] "RemoveContainer" containerID="ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625"
Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.982041 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"]
Jan 28 12:30:31 crc kubenswrapper[4804]: E0128 12:30:31.982993 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles"
Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.983008 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles"
Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.983182 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles"
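The 12:30:12 entries show the HTTP liveness probe for machine-config-daemon failing outright: nothing is listening on 127.0.0.1:8798, so the GET is refused before any response body exists (hence the empty start-of-body=). Roughly what the prober does on each attempt, as a self-contained sketch; probeOnce is a hypothetical name, not kubelet code:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates one HTTP liveness-probe attempt: GET the
// endpoint and treat any status in [200, 400) as healthy.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // the "connect: connection refused" case in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
		// After enough consecutive failures the kubelet kills the
		// container (gracePeriod=600 for this pod, per the later
		// entries) and lets the restart policy bring it back.
	}
}
```

Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.984353 4804 util.go:30] "No sandbox for pod can be found.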
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.995666 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064640 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064784 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064823 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.165932 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.165987 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166034 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166629 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.188862 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb"
Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.307549 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb"
Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.722611 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"]
Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532147 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" exitCode=0
Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532212 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612"}
Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532246 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"139cec1f5b9901b8ffc89b73806b83f5779d5e9653677ae4096b5740d84e1e64"}
Jan 28 12:30:34 crc kubenswrapper[4804]: I0128 12:30:34.543336 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"}
Jan 28 12:30:35 crc kubenswrapper[4804]: I0128 12:30:35.550625 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" exitCode=0
Jan 28 12:30:35 crc kubenswrapper[4804]: I0128 12:30:35.550953 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"}
Jan 28 12:30:36 crc kubenswrapper[4804]: I0128 12:30:36.559479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"}
Jan 28 12:30:36 crc kubenswrapper[4804]: I0128 12:30:36.577661 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tl2sb" podStartSLOduration=3.156965048 podStartE2EDuration="5.577647137s" podCreationTimestamp="2026-01-28 12:30:31 +0000 UTC" firstStartedPulling="2026-01-28 12:30:33.53383634 +0000 UTC m=+4109.328716324" lastFinishedPulling="2026-01-28 12:30:35.954518439 +0000 UTC m=+4111.749398413" observedRunningTime="2026-01-28 12:30:36.575200481 +0000 UTC m=+4112.370080465" watchObservedRunningTime="2026-01-28 12:30:36.577647137 +0000 UTC m=+4112.372527121"
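The startup-latency entry above decomposes cleanly: image pulling ran from m=+4109.328716324 to m=+4111.749398413, i.e. 2.420682089s, and podStartSLOduration is the end-to-end 5.577647137s minus that pull time, which gives exactly the reported 3.156965048. The same arithmetic, checked in a few lines of Go using the figures copied from the entry:

```go
package main

import "fmt"

func main() {
	// Values taken verbatim from the tracker entry; the m=+... monotonic
	// clock readings are what the pull-time subtraction is done on.
	const (
		e2e       = 5.577647137    // podStartE2EDuration, seconds
		firstPull = 4109.328716324 // m= reading of firstStartedPulling
		lastPull  = 4111.749398413 // m= reading of lastFinishedPulling
	)
	pull := lastPull - firstPull
	fmt.Printf("image pull:          %.9fs\n", pull)     // 2.420682089s
	fmt.Printf("podStartSLOduration: %.9f\n", e2e-pull)  // 3.156965048
	// Matches the logged podStartSLOduration up to float rounding,
	// i.e. the SLO figure is startup latency excluding image pulls.
}
```

Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.308050 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""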
pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.308653 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.363208 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.582223 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.582288 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.648392 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.689822 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:44 crc kubenswrapper[4804]: I0128 12:30:44.619365 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tl2sb" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server" containerID="cri-o://32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" gracePeriod=2 Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.309874 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339385 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.340706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities" (OuterVolumeSpecName: "utilities") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.350142 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf" (OuterVolumeSpecName: "kube-api-access-jwrzf") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "kube-api-access-jwrzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.441095 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.441131 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627118 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" exitCode=0 Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"} Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"139cec1f5b9901b8ffc89b73806b83f5779d5e9653677ae4096b5740d84e1e64"} Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627227 4804 scope.go:117] "RemoveContainer" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627237 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.653227 4804 scope.go:117] "RemoveContainer" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.738751 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.746044 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.754618 4804 scope.go:117] "RemoveContainer" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.770938 4804 scope.go:117] "RemoveContainer" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771283 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": container with ID starting with 32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd not found: ID does not exist" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771312 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"} err="failed to get container status \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": rpc error: code = NotFound desc = could not find container \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": container with ID starting with 32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771332 4804 scope.go:117] "RemoveContainer" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771581 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": container with ID starting with 288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d not found: ID does not exist" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771605 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"} err="failed to get container status \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": rpc error: code = NotFound desc = could not find container \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": container with ID starting with 288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771620 4804 scope.go:117] "RemoveContainer" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771858 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": container with ID starting with 6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612 not found: ID does not exist" 
containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771894 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612"} err="failed to get container status \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": rpc error: code = NotFound desc = could not find container \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": container with ID starting with 6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612 not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.960274 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.970367 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:46 crc kubenswrapper[4804]: I0128 12:30:46.939530 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" path="/var/lib/kubelet/pods/d1fb8773-4961-41e7-9111-b828c5e51c99/volumes" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.582374 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.583585 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.583685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.584817 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.584919 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" gracePeriod=600 Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809059 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" exitCode=0 Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"} Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809276 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:31:13 crc kubenswrapper[4804]: I0128 12:31:13.821327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} Jan 28 12:33:12 crc kubenswrapper[4804]: I0128 12:33:12.582511 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:33:12 crc kubenswrapper[4804]: I0128 12:33:12.583120 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:33:42 crc kubenswrapper[4804]: I0128 12:33:42.582346 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:33:42 crc kubenswrapper[4804]: I0128 12:33:42.582908 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.582360 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.582984 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583030 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583584 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 
12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583639 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" gracePeriod=600 Jan 28 12:34:13 crc kubenswrapper[4804]: E0128 12:34:13.245362 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404248 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" exitCode=0 Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404365 4804 scope.go:117] "RemoveContainer" containerID="2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.405358 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:13 crc kubenswrapper[4804]: E0128 12:34:13.405817 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:25 crc kubenswrapper[4804]: I0128 12:34:25.914934 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:25 crc kubenswrapper[4804]: E0128 12:34:25.915625 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:40 crc kubenswrapper[4804]: I0128 12:34:40.916264 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:40 crc kubenswrapper[4804]: E0128 12:34:40.918567 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:52 crc kubenswrapper[4804]: I0128 12:34:52.914668 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:52 crc kubenswrapper[4804]: E0128 12:34:52.915350 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:04 crc kubenswrapper[4804]: I0128 12:35:04.923122 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:04 crc kubenswrapper[4804]: E0128 12:35:04.924214 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:19 crc kubenswrapper[4804]: I0128 12:35:19.915355 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:19 crc kubenswrapper[4804]: E0128 12:35:19.916466 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:34 crc kubenswrapper[4804]: I0128 12:35:34.921021 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:34 crc kubenswrapper[4804]: E0128 12:35:34.921845 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:49 crc kubenswrapper[4804]: I0128 12:35:49.915068 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:49 crc kubenswrapper[4804]: E0128 12:35:49.915851 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:36:04 crc kubenswrapper[4804]: I0128 12:36:04.919391 4804 
scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:36:04 crc kubenswrapper[4804]: E0128 12:36:04.920310 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:36:19 crc kubenswrapper[4804]: I0128 12:36:19.915099 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:36:19 crc kubenswrapper[4804]: E0128 12:36:19.915907 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:36:31 crc kubenswrapper[4804]: I0128 12:36:31.915699 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:36:31 crc kubenswrapper[4804]: E0128 12:36:31.916512 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:36:42 crc kubenswrapper[4804]: I0128 12:36:42.915532 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:36:42 crc kubenswrapper[4804]: E0128 12:36:42.917101 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:36:56 crc kubenswrapper[4804]: I0128 12:36:56.915310 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:36:56 crc kubenswrapper[4804]: E0128 12:36:56.916185 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.529905 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530866 4804 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-content" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530899 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-content" Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530923 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530930 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server" Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530952 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-utilities" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530962 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-utilities" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.531125 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.532372 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.546234 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599749 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599797 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.700982 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701031 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod 
\"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701099 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701652 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701709 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.721716 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.848811 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.285631 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.532097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46g75"] Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.534160 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.548250 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"] Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714216 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714271 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808088 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8" exitCode=0 Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808132 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8"} Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"dcdf674db3a717933ef61b4b228718afbf102de6aef7a1d4dcfe349fcd4ff1b6"} Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.809788 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.815823 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.815935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.816324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.816718 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.817003 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.837605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.887693 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.193499 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"] Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818181 4804 generic.go:334] "Generic (PLEG): container finished" podID="f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9" containerID="c58acf18235fd7df1170d9752645f7eb810deb7e0134c3cfa58e4d677a06e01a" exitCode=0 Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818300 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerDied","Data":"c58acf18235fd7df1170d9752645f7eb810deb7e0134c3cfa58e4d677a06e01a"} Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerStarted","Data":"f8bddb7e636123185da6486c3969316eb394e6023d208820f9123666e8a30726"} Jan 28 12:37:11 crc kubenswrapper[4804]: I0128 12:37:11.828357 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430"} Jan 28 12:37:11 crc kubenswrapper[4804]: I0128 12:37:11.915583 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:11 crc kubenswrapper[4804]: E0128 12:37:11.915819 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:37:12 crc kubenswrapper[4804]: I0128 12:37:12.838120 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430" exitCode=0 Jan 28 12:37:12 crc kubenswrapper[4804]: I0128 12:37:12.838176 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430"} Jan 28 12:37:15 crc kubenswrapper[4804]: I0128 12:37:15.861865 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7"} Jan 28 12:37:15 crc kubenswrapper[4804]: I0128 12:37:15.884449 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmsxq" podStartSLOduration=2.578927496 podStartE2EDuration="7.884433695s" podCreationTimestamp="2026-01-28 12:37:08 +0000 UTC" firstStartedPulling="2026-01-28 12:37:09.809555899 +0000 UTC m=+4505.604435883" lastFinishedPulling="2026-01-28 12:37:15.115062098 +0000 UTC m=+4510.909942082" observedRunningTime="2026-01-28 12:37:15.884204558 +0000 UTC m=+4511.679084542" watchObservedRunningTime="2026-01-28 12:37:15.884433695 +0000 UTC m=+4511.679313679" Jan 28 12:37:16 crc kubenswrapper[4804]: I0128 12:37:16.868919 4804 generic.go:334] "Generic (PLEG): container finished" podID="f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9" containerID="90c5f0bbcbefe5088f0189f9caee3989fd1a20cf07489f8da5a3c183a3d9185c" exitCode=0 Jan 28 12:37:16 crc kubenswrapper[4804]: I0128 12:37:16.869042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerDied","Data":"90c5f0bbcbefe5088f0189f9caee3989fd1a20cf07489f8da5a3c183a3d9185c"} Jan 28 12:37:17 crc kubenswrapper[4804]: I0128 12:37:17.879637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerStarted","Data":"e5d55ae93d4218549d90610e7302adc6e228eb1b32b5a4968b4abcf1095f9d0d"} Jan 28 12:37:17 crc kubenswrapper[4804]: I0128 12:37:17.909520 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46g75" podStartSLOduration=2.477683887 podStartE2EDuration="8.909502704s" podCreationTimestamp="2026-01-28 12:37:09 +0000 UTC" firstStartedPulling="2026-01-28 12:37:10.988910036 +0000 UTC m=+4506.783790040" lastFinishedPulling="2026-01-28 12:37:17.420728873 +0000 UTC m=+4513.215608857" observedRunningTime="2026-01-28 12:37:17.90585033 +0000 UTC m=+4513.700730334" watchObservedRunningTime="2026-01-28 12:37:17.909502704 +0000 UTC m=+4513.704382688" Jan 28 12:37:18 crc kubenswrapper[4804]: I0128 12:37:18.850098 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:18 crc kubenswrapper[4804]: I0128 12:37:18.850154 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 
28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.203539 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"] Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.204937 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.211606 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4h9f"/"openshift-service-ca.crt" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.212050 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c4h9f"/"default-dockercfg-mdkfw" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.212182 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4h9f"/"kube-root-ca.crt" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.214251 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"] Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.253591 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.253681 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.354560 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.354636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.355174 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.390365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.523408 4804 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.887952 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.888357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.891129 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmsxq" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" probeResult="failure" output=< Jan 28 12:37:19 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 12:37:19 crc kubenswrapper[4804]: > Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.936316 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.940301 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"] Jan 28 12:37:19 crc kubenswrapper[4804]: W0128 12:37:19.941814 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d220da7_e30a_4dde_9ae8_c10ada1875f8.slice/crio-33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9 WatchSource:0}: Error finding container 33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9: Status 404 returned error can't find the container with id 33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9 Jan 28 12:37:20 crc kubenswrapper[4804]: I0128 12:37:20.909409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9"} Jan 28 12:37:26 crc kubenswrapper[4804]: I0128 12:37:26.915405 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:26 crc kubenswrapper[4804]: E0128 12:37:26.916177 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:37:28 crc kubenswrapper[4804]: I0128 12:37:28.890703 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:28 crc kubenswrapper[4804]: I0128 12:37:28.939947 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:29 crc kubenswrapper[4804]: I0128 12:37:29.933064 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46g75" Jan 28 12:37:29 crc kubenswrapper[4804]: I0128 12:37:29.984517 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" 
event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"} Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.362128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"] Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.531157 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.531411 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmsxq" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" containerID="cri-o://844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7" gracePeriod=2 Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.720073 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.720922 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8n6zc" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" containerID="cri-o://8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862" gracePeriod=2 Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.993582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"} Jan 28 12:37:31 crc kubenswrapper[4804]: I0128 12:37:31.013237 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" podStartSLOduration=2.311081938 podStartE2EDuration="12.01320827s" podCreationTimestamp="2026-01-28 12:37:19 +0000 UTC" firstStartedPulling="2026-01-28 12:37:19.944227674 +0000 UTC m=+4515.739107658" lastFinishedPulling="2026-01-28 12:37:29.646354006 +0000 UTC m=+4525.441233990" observedRunningTime="2026-01-28 12:37:31.007717708 +0000 UTC m=+4526.802597682" watchObservedRunningTime="2026-01-28 12:37:31.01320827 +0000 UTC m=+4526.808088254" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.004120 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7" exitCode=0 Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.004196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7"} Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.007005 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" containerID="8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862" exitCode=0 Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.007070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862"} Jan 28 12:37:32 crc kubenswrapper[4804]: 
I0128 12:37:32.543987 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.574835 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.574991 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.576071 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.576425 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities" (OuterVolumeSpecName: "utilities") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.591705 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns" (OuterVolumeSpecName: "kube-api-access-jz5ns") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "kube-api-access-jz5ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.641282 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.680947 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.681399 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.681417 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025109 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18"} Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025168 4804 scope.go:117] "RemoveContainer" containerID="8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025291 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.057128 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.067738 4804 scope.go:117] "RemoveContainer" containerID="5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.076111 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.456507 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.465042 4804 scope.go:117] "RemoveContainer" containerID="97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597077 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597252 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597355 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.598323 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities" (OuterVolumeSpecName: "utilities") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.606137 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9" (OuterVolumeSpecName: "kube-api-access-xh9r9") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "kube-api-access-xh9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.699230 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.699595 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.740483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.801399 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035315 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"dcdf674db3a717933ef61b4b228718afbf102de6aef7a1d4dcfe349fcd4ff1b6"} Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035377 4804 scope.go:117] "RemoveContainer" containerID="844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035329 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.064237 4804 scope.go:117] "RemoveContainer" containerID="e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.070950 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.079106 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"] Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.091103 4804 scope.go:117] "RemoveContainer" containerID="60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.925906 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" path="/var/lib/kubelet/pods/477f5ec7-c491-494c-add6-a233798ffdfa/volumes" Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.926837 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" path="/var/lib/kubelet/pods/d6565976-3a91-4cc5-9fb6-e564382fdf6e/volumes" Jan 28 12:37:38 crc kubenswrapper[4804]: I0128 12:37:38.915959 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:38 crc kubenswrapper[4804]: E0128 12:37:38.916691 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:37:52 crc kubenswrapper[4804]: I0128 12:37:52.916584 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:52 crc kubenswrapper[4804]: E0128 12:37:52.918526 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:06 crc kubenswrapper[4804]: I0128 12:38:06.915160 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:06 crc kubenswrapper[4804]: E0128 12:38:06.916063 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:21 crc kubenswrapper[4804]: I0128 12:38:21.915711 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:21 crc kubenswrapper[4804]: E0128 12:38:21.916476 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:32 crc kubenswrapper[4804]: I0128 12:38:32.957805 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.149981 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.167078 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.167145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.339727 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/extract/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.354588 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.374970 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.572822 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-vjb6d_c36b33fc-3ff6-4c44-a079-bc48a5a3d509/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.645006 4804 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-j5j86_db8796b2-e360-4287-9ba2-4ceda6de770e/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.695378 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-fbggh_b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.886818 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-qz2dl_186e63a0-88e6-404b-963c-e5cb22485277/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.899699 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-hxv8b_acdcc5e8-c284-444e-86c2-96aec766b35b/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.183360 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fw9dq_ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.464235 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-wb5k2_f75f08ff-7d3c-4fb4-a366-1c996771a71d/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.520336 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-k6rzx_e770ba97-59e1-4752-8e93-bc7d53ff7c04/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.630947 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-wl5w5_ec1046a1-b834-40e4-b82a-923885428171/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.734763 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-s92b7_d5ce0c1e-3061-46ed-a816-3839144b160a/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.859676 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-7dg9l_07990c6c-3350-45a8-85de-1e0db97acb07/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.919827 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:34 crc kubenswrapper[4804]: E0128 12:38:34.920059 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.980145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-n9kpn_b79b961c-583d-4e78-8513-c44ed292c325/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.115097 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-m5xng_8f1a2428-c6c8-4113-9654-0c58ab91b45b/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.182739 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-dndv5_8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.262948 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg_a26075bd-4d23-463a-abe8-575a02ebc9ad/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.481766 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-cdb5b4f99-hxlm9_134135c7-1032-47aa-b0bd-361463826caf/operator/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.666858 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cmjpc_d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec/registry-server/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.938688 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4cpk5_7ab2436a-1b54-4c5e-bdc1-959026660c98/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.045459 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-bfl45_deece2f8-8c1c-4599-80f4-44e6ec055a18/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.205055 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cqlch_69938639-9ff0-433c-bd73-8d129935e7d4/operator/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.302719 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6548796f98-5pssc_58f748c2-ceb6-4d34-8a2e-8227e59ef560/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.400686 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-fwd68_eb1c01a9-6548-49cd-8e1f-4f01daaff754/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.512984 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-2hdgj_23a10136-5079-4838-adf9-6512ccfd5f2c/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.587969 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-9vgvb_ff35634f-2b61-44e4-934a-74b39c5b7335/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.676561 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-659wf_67fbb1e9-d718-4075-971a-33a245c498e3/manager/0.log" Jan 28 12:38:49 crc kubenswrapper[4804]: I0128 12:38:49.915274 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:49 crc kubenswrapper[4804]: E0128 12:38:49.916175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.059635 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f822b_c03ebf08-d5a0-48b4-a1ca-3eec30c14490/control-plane-machine-set-operator/0.log" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.235787 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5p7p_e2b8b707-60c9-4138-a4d8-d218162737fe/kube-rbac-proxy/0.log" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.276564 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5p7p_e2b8b707-60c9-4138-a4d8-d218162737fe/machine-api-operator/0.log" Jan 28 12:39:03 crc kubenswrapper[4804]: I0128 12:39:03.915262 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:39:03 crc kubenswrapper[4804]: E0128 12:39:03.916022 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.385044 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-hkwds_4da2c74c-883d-4690-bb94-a34b198ccf89/cert-manager-controller/0.log" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.539328 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-pgj92_47a0c933-7194-403d-8345-446cc9941fa5/cert-manager-cainjector/0.log" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.557942 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cjsz8_dd7c8a18-36d1-45d5-aaf5-daff9b218438/cert-manager-webhook/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.737167 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bbn52_77313f93-489e-4da6-81bb-eec0c795e242/nmstate-console-plugin/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.876169 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-r6vm7_a741d157-784a-4e3e-9e35-200d91f3aa47/nmstate-handler/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.914442 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2pq8_b63500d6-29e0-4eef-82cd-fdc0036ef0f2/kube-rbac-proxy/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.974921 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2pq8_b63500d6-29e0-4eef-82cd-fdc0036ef0f2/nmstate-metrics/0.log" Jan 28 12:39:17 crc kubenswrapper[4804]: I0128 12:39:17.076162 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hzhkh_d478ae3c-a9f5-4f6e-ae30-1bd80027de73/nmstate-operator/0.log" Jan 28 12:39:17 crc kubenswrapper[4804]: I0128 12:39:17.153917 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-c5t8z_c17b2105-0264-4cf3-8204-e68ba577728e/nmstate-webhook/0.log" Jan 28 12:39:18 crc kubenswrapper[4804]: I0128 12:39:18.915927 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:39:19 crc kubenswrapper[4804]: I0128 12:39:19.877434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"} Jan 28 12:39:41 crc kubenswrapper[4804]: I0128 12:39:41.965177 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rfhfx_1ae74e9e-799f-46bb-9a53-c8307c83203d/kube-rbac-proxy/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.222145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.309972 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rfhfx_1ae74e9e-799f-46bb-9a53-c8307c83203d/controller/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.404217 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.436929 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.437637 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.477975 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.643668 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.665559 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.668307 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.686693 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.820576 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.838258 4804 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.865983 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.870028 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/controller/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.012626 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/frr-metrics/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.073502 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/kube-rbac-proxy/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.098481 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/kube-rbac-proxy-frr/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.189641 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190034 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190055 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190089 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190097 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190117 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190126 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190139 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190145 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190158 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190163 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190174 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" 
containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190180 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190342 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190358 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.191541 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.203979 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.246249 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/reloader/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338680 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338896 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.410709 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-cvlt6_3ce00c89-f00d-43aa-9907-77bf331c3dbd/frr-k8s-webhook-server/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.439658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.439732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 
12:39:43.439818 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.440179 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.440227 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.458619 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.513352 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.745486 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b85b59588-rf4wr_a0eda12d-b723-4a3a-8f2b-916de07b279c/manager/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.926179 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b844cd4fc-mn427_13606290-8fc4-4792-a328-207ee9a1994e/webhook-server/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.067118 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.181726 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/frr/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.220904 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kcvj8_2fa1df7e-03c8-4931-ad89-222acae36030/kube-rbac-proxy/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.617857 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kcvj8_2fa1df7e-03c8-4931-ad89-222acae36030/speaker/0.log" Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034272 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" exitCode=0 Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034321 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" 
event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17"} Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034349 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"78338d6312df0542bef470ab0fb2807b259b1b7b922c83ea5be1638d62c31969"} Jan 28 12:39:46 crc kubenswrapper[4804]: I0128 12:39:46.042290 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} Jan 28 12:39:47 crc kubenswrapper[4804]: I0128 12:39:47.050037 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" exitCode=0 Jan 28 12:39:47 crc kubenswrapper[4804]: I0128 12:39:47.050089 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} Jan 28 12:39:48 crc kubenswrapper[4804]: I0128 12:39:48.059840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} Jan 28 12:39:48 crc kubenswrapper[4804]: I0128 12:39:48.083484 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t6tht" podStartSLOduration=2.539196473 podStartE2EDuration="5.083466403s" podCreationTimestamp="2026-01-28 12:39:43 +0000 UTC" firstStartedPulling="2026-01-28 12:39:45.036207909 +0000 UTC m=+4660.831087893" lastFinishedPulling="2026-01-28 12:39:47.580477839 +0000 UTC m=+4663.375357823" observedRunningTime="2026-01-28 12:39:48.078691455 +0000 UTC m=+4663.873571449" watchObservedRunningTime="2026-01-28 12:39:48.083466403 +0000 UTC m=+4663.878346387" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.514342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.514866 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.562913 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:54 crc kubenswrapper[4804]: I0128 12:39:54.138020 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:54 crc kubenswrapper[4804]: I0128 12:39:54.177895 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.114159 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t6tht" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" 
containerID="cri-o://57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" gracePeriod=2 Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.581867 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621603 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621695 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.622746 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities" (OuterVolumeSpecName: "utilities") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.629969 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4" (OuterVolumeSpecName: "kube-api-access-djgj4") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). InnerVolumeSpecName "kube-api-access-djgj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.651196 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.700658 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723218 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723259 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723275 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.807942 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.839210 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.922601 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.048042 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.081384 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.087019 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/extract/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121858 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" exitCode=0 Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121928 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121961 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"78338d6312df0542bef470ab0fb2807b259b1b7b922c83ea5be1638d62c31969"} Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121978 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121985 4804 scope.go:117] "RemoveContainer" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.140792 4804 scope.go:117] "RemoveContainer" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.144837 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.153457 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.170592 4804 scope.go:117] "RemoveContainer" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191348 4804 scope.go:117] "RemoveContainer" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.191808 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": container with ID starting with 57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd not found: ID does not exist" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191861 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} err="failed to get container status \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": rpc error: code = NotFound desc = could not find container \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": container with ID starting with 57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191904 4804 scope.go:117] "RemoveContainer" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.192378 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": container with ID starting with 1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0 not found: ID does not exist" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.192414 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} err="failed to get container status \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": rpc error: code = NotFound desc = could not find container \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": container with ID starting with 1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0 not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.192445 4804 scope.go:117] "RemoveContainer" 
containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.195024 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": container with ID starting with 2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17 not found: ID does not exist" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.195066 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17"} err="failed to get container status \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": rpc error: code = NotFound desc = could not find container \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": container with ID starting with 2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17 not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.246229 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.380169 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.404129 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.404615 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.562346 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.566588 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/extract/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.588773 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.984382 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.138342 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 
12:39:58.155067 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.175981 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.323606 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.353310 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/extract/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.353562 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.492293 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.663709 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.671349 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.685493 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.819807 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.838377 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.923215 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83e9c26-344d-455c-bb51-d378c8016381" path="/var/lib/kubelet/pods/d83e9c26-344d-455c-bb51-d378c8016381/volumes" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.944702 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/registry-server/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.354901 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.460762 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.494425 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.516733 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.682687 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.718577 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.876763 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s76k6_349fc9e3-a236-44fd-b7b9-ee08f25c58fd/marketplace-operator/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.978896 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.107363 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.151049 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.163647 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.359606 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/registry-server/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.360755 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.373305 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.518591 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/registry-server/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.550242 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.662262 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.715186 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.715793 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.846883 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.869836 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:01 crc kubenswrapper[4804]: I0128 12:40:01.504032 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/registry-server/0.log" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.685347 4804 generic.go:334] "Generic (PLEG): container finished" podID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f" exitCode=0 Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.685448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerDied","Data":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"} Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.686648 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842529 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"] Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842906 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842938 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-content" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842947 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-content" Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842964 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-utilities" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842971 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-utilities" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.843151 4804 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.844125 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.864248 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"] Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.915944 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/gather/0.log" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942623 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942818 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942868 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044409 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.045096 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.045261 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.067065 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.167097 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.628187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"] Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.695552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerStarted","Data":"922433d8b0899e9096a4bfb7dca7688b52a594e4b98b56b30120e33078c90694"} Jan 28 12:41:18 crc kubenswrapper[4804]: I0128 12:41:18.741565 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a" exitCode=0 Jan 28 12:41:18 crc kubenswrapper[4804]: I0128 12:41:18.741653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"} Jan 28 12:41:19 crc kubenswrapper[4804]: I0128 12:41:19.750343 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79" exitCode=0 Jan 28 12:41:19 crc kubenswrapper[4804]: I0128 12:41:19.750453 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"} Jan 28 12:41:20 crc kubenswrapper[4804]: I0128 12:41:20.758652 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerStarted","Data":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"} Jan 28 12:41:20 crc kubenswrapper[4804]: I0128 12:41:20.777427 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nlwck" podStartSLOduration=3.377320085 podStartE2EDuration="4.777408158s" podCreationTimestamp="2026-01-28 12:41:16 +0000 UTC" firstStartedPulling="2026-01-28 12:41:18.743659138 +0000 UTC m=+4754.538539122" lastFinishedPulling="2026-01-28 12:41:20.143747211 +0000 UTC m=+4755.938627195" observedRunningTime="2026-01-28 12:41:20.773794805 +0000 UTC m=+4756.568674789" watchObservedRunningTime="2026-01-28 12:41:20.777408158 +0000 UTC m=+4756.572288142" Jan 28 12:41:23 crc kubenswrapper[4804]: 
I0128 12:41:23.873649 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:41:23 crc kubenswrapper[4804]: I0128 12:41:23.874339 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" podUID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerName="copy" containerID="cri-o://d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480" gracePeriod=2
Jan 28 12:41:23 crc kubenswrapper[4804]: I0128 12:41:23.879992 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.231307 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/copy/0.log"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.232550 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.350706 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") "
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.351005 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") "
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.356397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr" (OuterVolumeSpecName: "kube-api-access-ctfdr") pod "0d220da7-e30a-4dde-9ae8-c10ada1875f8" (UID: "0d220da7-e30a-4dde-9ae8-c10ada1875f8"). InnerVolumeSpecName "kube-api-access-ctfdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.441348 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d220da7-e30a-4dde-9ae8-c10ada1875f8" (UID: "0d220da7-e30a-4dde-9ae8-c10ada1875f8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.452898 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.452946 4804 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.788357 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/copy/0.log"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789063 4804 generic.go:334] "Generic (PLEG): container finished" podID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480" exitCode=143
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789130 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789141 4804 scope.go:117] "RemoveContainer" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.806664 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868470 4804 scope.go:117] "RemoveContainer" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: E0128 12:41:24.868834 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": container with ID starting with d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480 not found: ID does not exist" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868865 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"} err="failed to get container status \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": rpc error: code = NotFound desc = could not find container \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": container with ID starting with d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480 not found: ID does not exist"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868912 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: E0128 12:41:24.869202 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": container with ID starting with c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f not found: ID does not exist" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.869235 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"} err="failed to get container status \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": rpc error: code = NotFound desc = could not find container \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": container with ID starting with c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f not found: ID does not exist"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.925547 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" path="/var/lib/kubelet/pods/0d220da7-e30a-4dde-9ae8-c10ada1875f8/volumes"
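[Annotation] The sequence above is a normal pod teardown: SyncLoop DELETE, a kill with gracePeriod=2, volume unmounts, then RemoveContainer. The "DeleteContainer returned error ... NotFound" pairs are benign: by the time the kubelet re-queries CRI-O, the container is already gone, so the delete is treated as complete and cleanup proceeds to the orphaned volumes dir. A minimal Go sketch of that idempotent-delete pattern (not the kubelet's actual code; removeFromRuntime is a hypothetical stand-in for the CRI client):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer deletes a container but treats a gRPC NotFound as
    // success: another cleanup path already removed it, which is what the
    // "DeleteContainer returned error ... NotFound" lines above record
    // before the volumes dir is cleaned up anyway.
    func removeContainer(removeFromRuntime func(id string) error, id string) error {
        err := removeFromRuntime(id)
        if status.Code(err) == codes.NotFound {
            fmt.Printf("container %s already gone; treating delete as done\n", id)
            return nil
        }
        return err // nil on success, real error otherwise
    }

    func main() {
        // Simulated CRI client whose container has already been removed.
        gone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        if err := removeContainer(gone, "d3c5183fe314"); err != nil {
            fmt.Println("unexpected error:", err)
        }
    }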
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.167944 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.168501 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.210485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.851613 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.901933 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:29 crc kubenswrapper[4804]: I0128 12:41:29.823133 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nlwck" podUID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerName="registry-server" containerID="cri-o://c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c" gracePeriod=2
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.277610 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348571 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348677 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.350858 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities" (OuterVolumeSpecName: "utilities") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.355412 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l" (OuterVolumeSpecName: "kube-api-access-ch26l") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "kube-api-access-ch26l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.371965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451252 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451542 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451632 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831244 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c" exitCode=0
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"}
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831334 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"922433d8b0899e9096a4bfb7dca7688b52a594e4b98b56b30120e33078c90694"}
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831336 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831355 4804 scope.go:117] "RemoveContainer" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.850122 4804 scope.go:117] "RemoveContainer" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.866099 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.872266 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.893468 4804 scope.go:117] "RemoveContainer" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908260 4804 scope.go:117] "RemoveContainer" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.908694 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": container with ID starting with c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c not found: ID does not exist" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908735 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"} err="failed to get container status \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": rpc error: code = NotFound desc = could not find container \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": container with ID starting with c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908761 4804 scope.go:117] "RemoveContainer" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.909189 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": container with ID starting with 06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79 not found: ID does not exist" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909231 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"} err="failed to get container status \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": rpc error: code = NotFound desc = could not find container \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": container with ID starting with 06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79 not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909260 4804 scope.go:117] "RemoveContainer" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.909803 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": container with ID starting with bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a not found: ID does not exist" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909844 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"} err="failed to get container status \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": rpc error: code = NotFound desc = could not find container \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": container with ID starting with bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.923948 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" path="/var/lib/kubelet/pods/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49/volumes"
Jan 28 12:41:42 crc kubenswrapper[4804]: I0128 12:41:42.582424 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:41:42 crc kubenswrapper[4804]: I0128 12:41:42.583186 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:41:58 crc kubenswrapper[4804]: E0128 12:41:58.263158 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-hostnamed.service\": RecentStats: unable to find data in memory cache]"
Jan 28 12:42:12 crc kubenswrapper[4804]: I0128 12:42:12.581712 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:42:12 crc kubenswrapper[4804]: I0128 12:42:12.582272 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.582633 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.583346 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.583412 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.584154 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.584235 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4" gracePeriod=600
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.324989 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4" exitCode=0
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325081 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"}
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325369 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"ec3bd661a19a2cd11869dadca6b31f34237816cc3d7caece0577c2a01a50e5db"}
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325398 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:44:42 crc kubenswrapper[4804]: I0128 12:44:42.582744 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:44:42 crc kubenswrapper[4804]: I0128 12:44:42.583282 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
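[Annotation] The machine-config-daemon entries show the standard liveness flow: failures at 12:41:42, 12:42:12, and 12:42:42 (a 30-second cadence), then "failed liveness probe, will be restarted" and a kill with gracePeriod=600, consistent with a failureThreshold of 3 (an inference from the timestamps; the threshold is not stated in the log). The renewed failures at 12:44:42 show the replacement container refusing connections on the same endpoint, so the cycle is positioned to repeat. A hedged Go sketch of the threshold bookkeeping plus the same HTTP GET the prober issues (hypothetical code; the endpoint is copied verbatim from the log):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // prober counts consecutive liveness failures; a restart is only
    // triggered once failureThreshold is reached, matching the three
    // failed probes logged before "will be restarted".
    type prober struct {
        failureThreshold int
        consecutiveFails int
    }

    func (p *prober) observe(healthy bool) (restart bool) {
        if healthy {
            p.consecutiveFails = 0
            return false
        }
        p.consecutiveFails++
        return p.consecutiveFails >= p.failureThreshold
    }

    // check performs the same GET the kubelet's HTTP prober runs against
    // the daemon; on this node it fails with "connection refused".
    func check(url string) bool {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("probe failure:", err)
            return false
        }
        resp.Body.Close()
        return resp.StatusCode < 400
    }

    func main() {
        p := prober{failureThreshold: 3} // inferred from the log cadence
        for i := 0; i < 3; i++ {
            if p.observe(check("http://127.0.0.1:8798/health")) {
                fmt.Println("unhealthy: restarting with gracePeriod=600")
            }
        }
    }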